Similar Literature
20 similar documents retrieved.
1.
Controlling multiple multi-joint fish-like robots has long captivated the attention of engineers and biologists, and a fundamental but challenging topic is to robustly track the postures of the individuals in real time. This requires detecting multiple robots, estimating multi-joint postures, and tracking identities, as well as fast real-time processing. To the best of our knowledge, this challenge has not been tackled in previous studies. In this paper, to precisely track the planar postures of multiple swimming multi-joint fish-like robots in real time, we propose a novel deep neural network-based method, named TAB-IOL. Its TAB part fuses the top-down and bottom-up approaches for vision-based pose estimation, while the IOL part with long short-term memory considers the motion constraints among joints for precise pose tracking. The satisfactory performance of our TAB-IOL is verified by testing on a group of freely swimming fish-like robots in various scenarios with strong disturbances and by a detailed comparison of accuracy, speed, and robustness with state-of-the-art algorithms. Further, based on the precise pose estimation and tracking realized by our TAB-IOL, several formation control experiments are conducted for the group of fish-like robots. The results clearly demonstrate that our TAB-IOL lays a solid foundation for the coordination control of multiple fish-like robots in a real working environment. We believe our proposed method will facilitate the growth and development of related fields.

2.
In this paper, we investigated an approach for robots to learn to adapt dance actions to humans' preferences through interaction and feedback. Human preferences were extracted by analysing the common action patterns that received positive or negative feedback from the human during robot dancing. By using a buffering technique to store the dance actions before a feedback, each individual's preferences can be extracted even when a reward is received late. The extracted preferred dance actions from different people were then combined to generate improved dance sequences, i.e. performing more of what was preferred and less of what was not preferred. Together with the Softmax action-selection method, the Sarsa reinforcement learning algorithm was used as the underlying learning algorithm to effectively control the trade-off between exploitation of the learnt dance skills and exploration of new dance actions. The results showed that the robot learnt, using interactive reinforcement learning, the preferences of its human partners, and that the dance improved with the preferences extracted from more human partners.
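The Sarsa update with Softmax (Boltzmann) action selection mentioned above is standard; the following is a minimal, generic sketch of that learner on a toy placeholder environment. The state/action spaces, reward function, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 10, 4        # hypothetical dance states / actions
alpha, gamma, tau = 0.1, 0.9, 0.5  # learning rate, discount, Softmax temperature
Q = np.zeros((n_states, n_actions))

def softmax_action(state):
    """Softmax (Boltzmann) action selection over the Q-values of one state."""
    prefs = Q[state] / tau
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    return rng.choice(n_actions, p=probs)

def step(state, action):
    """Placeholder environment: the reward stands in for human feedback."""
    reward = 1.0 if action == state % n_actions else 0.0
    next_state = rng.integers(n_states)
    return next_state, reward

state = rng.integers(n_states)
action = softmax_action(state)
for _ in range(5000):
    next_state, reward = step(state, action)
    next_action = softmax_action(next_state)  # on-policy choice
    # Sarsa update: Q(s,a) <- Q(s,a) + alpha * (r + gamma*Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state, next_action] - Q[state, action])
    state, action = next_state, next_action
```

The on-policy choice of `next_action` before the update is what distinguishes Sarsa from Q-learning.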

3.
段肖  马钢  危辉 《智能系统学报》2022,17(5):941-950
To cope with the complexity and diversity of the environment and to make robotic grasping tasks more robust, this paper starts from 3D object tracking and proposes a new method for achieving robot hand-eye coordination. The method uses an improved region-based pose tracking algorithm to simultaneously track the poses of the manipulator's gripper and the target object, and guides the manipulator's motion according to their relative position. For the region-based pose tracking algorithm, we propose building the segmentation model from local region segmentation lines and improving the linear update of the model's colour likelihood, so that the algorithm can accurately track both the gripper and the target object. A simulation environment was built on the ROS platform, and the effectiveness and robustness of the hand-eye coordination system were verified in both simulation and real-world experiments. This approach requires no hand-eye calibration and is closer to the human "Sensor-Actor" closed-loop control with feedback, while giving the robot enough flexibility to handle elastic tasks and changing environments.
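A minimal sketch of the calibration-free, closed-loop idea described above (not the authors' implementation): because the gripper and the object are both tracked in the same camera frame, a simple proportional command on their relative position can drive the arm. The gain, speed limit, and camera-frame velocity interface are illustrative assumptions.

```python
import numpy as np

K_P = 0.5          # proportional gain (assumed)
MAX_SPEED = 0.05   # m/s speed clamp (assumed)

def velocity_command(t_gripper_cam, t_object_cam):
    """Proportional velocity toward the object, expressed in the camera frame.

    Both inputs are 3-vectors from the pose tracker; because gripper and object
    are tracked in the same frame, no hand-eye calibration is needed to drive
    the relative error to zero.
    """
    error = np.asarray(t_object_cam, float) - np.asarray(t_gripper_cam, float)
    v = K_P * error
    speed = np.linalg.norm(v)
    if speed > MAX_SPEED:
        v *= MAX_SPEED / speed
    return v

# closed-loop use: re-estimate both poses each frame and re-issue the command
print(velocity_command([0.10, 0.00, 0.40], [0.25, -0.05, 0.35]))
```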

4.
A lattice-based MRF model for dynamic near-regular texture tracking
A near-regular texture (NRT) is a geometric and photometric deformation from its regular origin: a congruent wallpaper pattern formed by 2D translations of a single tile. A dynamic NRT is an NRT under motion. Although NRTs are pervasive in man-made and natural environments, effective computational algorithms for NRTs are few. This paper addresses specific computational challenges in modeling and tracking dynamic NRTs, including ambiguous correspondences, occlusions, and drastic illumination and appearance variations. We propose a lattice-based Markov-random-field (MRF) model for dynamic NRTs in a 3D spatiotemporal space. Our model consists of a global lattice structure that characterizes the topological constraint among multiple textons and an image observation model that handles local geometry and appearance variations. Based on the proposed MRF model, we develop a tracking algorithm that utilizes belief propagation and particle filtering to effectively handle the special challenges of dynamic NRT tracking without any assumption on the motion types or lighting conditions. We provide quantitative evaluations of the proposed method against existing tracking algorithms and demonstrate its applications in video editing.

5.
Robots are increasingly present in our lives, sharing the workspace and tasks with human co-workers. However, existing interfaces for human-robot interaction/cooperation (HRI/C) offer limited intuitiveness, and safety is a major concern when humans and robots share the same workspace. This is often due to the lack of a reliable estimate of the human pose in space, which is the primary input for calculating the human-robot minimum distance (required for safety and collision avoidance) and for HRI/C features based on machine learning algorithms that classify human behaviours/gestures. Each sensor type has its own characteristics, resulting in problems such as occlusions (vision) and drift (inertial) when used in isolation. In this paper, we propose a combined system that merges the human tracking provided by a 3D vision sensor with the pose estimation provided by a set of inertial measurement units (IMUs) placed on the limbs of the human body. The IMUs compensate for gaps in occluded areas to maintain tracking continuity. To mitigate the lingering effects of the IMU offset, we propose a continuous online calculation of the offset value. Experimental tests were designed to simulate human motion in a human-robot collaborative environment where the robot moves away to avoid unexpected collisions with the human. Results indicate that our approach is able to capture the position of the human, for example the forearm, with millimetre-range precision and robustness to occlusions.
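A much-simplified sketch of the fusion logic described above, assuming position-level outputs from both sensors: vision is used when available and simultaneously refreshes an online estimate of the IMU offset, which then bridges occlusions. The forgetting factor and the exponential offset update are illustrative choices, not the paper's filter.

```python
import numpy as np

class VisionImuFuser:
    """Simplified fusion sketch (not the paper's method): the IMU stream bridges
    vision dropouts, and its slowly drifting offset is re-estimated online
    whenever a vision sample is available."""

    def __init__(self, forgetting=0.05):
        self.forgetting = forgetting   # weight of each new offset sample (assumed)
        self.offset = np.zeros(3)      # running estimate of IMU bias vs. vision

    def update(self, imu_pos, vision_pos=None):
        imu_pos = np.asarray(imu_pos, float)
        if vision_pos is not None:                       # vision visible: trust it
            vision_pos = np.asarray(vision_pos, float)   # and refresh the offset
            self.offset = ((1 - self.forgetting) * self.offset
                           + self.forgetting * (imu_pos - vision_pos))
            return vision_pos
        return imu_pos - self.offset                     # occluded: corrected IMU

fuser = VisionImuFuser()
print(fuser.update([0.52, 0.11, 0.98], vision_pos=[0.50, 0.10, 1.00]))
print(fuser.update([0.55, 0.12, 0.97]))   # occlusion: IMU minus estimated offset
```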

6.
A survey of computer-vision-based human motion capture
Human motion capture is one of the hot topics in computer vision. Broadly, it refers to extracting and describing the motion of the human silhouette from image sequences and then tracking and recognizing it. This paper introduces the potential applications of motion capture, analyses its taxonomy and the state of research from five aspects, namely initialization, detection, tracking, pose estimation, and recognition, and finally briefly discusses the open problems and development trends in this field.

7.
A system for learning statistical motion patterns
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns that reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast and accurate fuzzy k-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information, and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of the algorithms for anomaly detection and behavior prediction.
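The foreground clustering step can be illustrated with a plain fuzzy c-means implementation, a generic stand-in for the fast fuzzy k-means variant mentioned above; the data and parameters here are synthetic.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means over 2D foreground-pixel coordinates."""
    rng = np.random.default_rng(seed)
    n = len(points)
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# foreground pixel coordinates would normally come from background subtraction
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal((40, 40), 3.0, (200, 2)),
                    rng.normal((120, 80), 3.0, (200, 2))])
centers, memberships = fuzzy_c_means(pixels, n_clusters=2)
print(centers)   # one centroid per moving object
```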

8.
With the introduction of deep learning into visual object tracking, the accuracy and robustness of tracking algorithms have improved greatly. However, in real low-altitude UAV tracking scenarios the conditions are complex, including camera shake, heavy occlusion, and changes of viewpoint and focal length, which severely challenge tracking accuracy. Most current algorithms assume that the target's appearance changes slowly and cannot detect or repair drift (tracking error) during tracking. To address this problem, a tracking-error correction method based on multi-scale proposals is presented. In the offline stage, a large number of labelled target samples are used to train a multi-scale-proposal correction model that captures prior knowledge of different target categories. In the online stage, built on kernelized correlation filter tracking, the target position is re-initialized from time to time by the correction model according to an adaptive evaluation of the correlation response confidence, which avoids tracking failure caused by accumulated error. The algorithm was tested on UAV aerial datasets; the results show that it corrects tracking drift well when the target undergoes large deformation. Compared with several other algorithms, the tracking success rate and precision improved by 14.3% and 3.1%, respectively.
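One common way to obtain the kind of response-confidence score used above for drift detection is the peak-to-sidelobe ratio (PSR) of the correlation response. The sketch below uses PSR with an assumed threshold and a hypothetical `redetect()` hook as stand-ins for the paper's adaptive confidence evaluation and multi-scale-proposal correction model.

```python
import numpy as np

PSR_THRESHOLD = 6.0   # assumed threshold; below it the track is treated as drifting

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR of a correlation-filter response map: peak height relative to the
    sidelobe statistics, a common stand-in for tracking confidence."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0, c0 = peak_idx
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-9)

def track_step(response, redetect):
    """If confidence collapses, fall back to a (hypothetical) detector to re-seed the tracker."""
    if peak_to_sidelobe_ratio(response) < PSR_THRESHOLD:
        return redetect()          # e.g. multi-scale proposals + classifier
    return np.unravel_index(np.argmax(response), response.shape)

rng = np.random.default_rng(0)
resp = rng.normal(0, 0.1, (64, 64))
resp[30, 40] = 1.0
print(round(peak_to_sidelobe_ratio(resp), 1))   # a clear peak gives a high PSR
```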

9.
Multisensor-Based Human Detection and Tracking for Mobile Service Robots
One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' position using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
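A hedged sketch of sequential sensor fusion in the spirit described above, simplified to a linear Kalman filter with a constant-velocity model instead of the paper's unscented filter; the time step and noise covariances are assumed values.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)      # both sensors measure (x, y)
Q = 0.01 * np.eye(4)                                    # process noise (assumed)
R_LEG, R_FACE = 0.05 * np.eye(2), 0.15 * np.eye(2)      # per-sensor noise (assumed)

x = np.zeros(4)            # person state: position and velocity
P = np.eye(4)

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z, R):
    """One sequential measurement update; called once per available sensor."""
    global x, P
    y = np.asarray(z, float) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

predict()
update([1.20, 0.40], R_LEG)      # laser leg detection
update([1.25, 0.38], R_FACE)     # camera face detection, fused sequentially
print(x[:2])
```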

10.
Constrained motion control is one of the most common control tasks found in many industrial robot applications. The nonlinear and nonclassical nature of the dynamic model of constrained robots makes designing a controller for accurate tracking of both motion and force a difficult problem. In this article, a discrete-time learning control problem for precise path tracking of motion and force for constrained robots is formulated and solved. The control system is able to reduce the tracking error iteratively, in the presence of external disturbances and errors in the initial conditions, as the robot repeats its action. A computer simulation result is presented to demonstrate the performance of the proposed learning controller. © 1994 John Wiley & Sons, Inc.
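The trial-to-trial learning idea can be illustrated with a standard P-type iterative learning control update on a toy linear plant; the plant, gains, and reference below are illustrative and much simpler than a constrained robot's dynamics.

```python
import numpy as np

# Toy discrete-time plant x[t+1] = a*x[t] + b*u[t]; the real constrained-robot
# dynamics are nonlinear, so this only illustrates the learning update itself.
a, b = 0.9, 0.5
N = 50
ref = np.sin(np.linspace(0, np.pi, N + 1))   # desired trajectory
gain = 1.0                                   # learning gain (assumed, with |1 - gain*b| < 1)

u = np.zeros(N)
for trial in range(10):                      # the robot repeats its action
    x = np.zeros(N + 1)
    for t in range(N):
        x[t + 1] = a * x[t] + b * u[t]
    e = ref - x                              # tracking error of this trial
    u = u + gain * e[1:]                     # ILC update: u_{k+1}(t) = u_k(t) + gain*e_k(t+1)
    print(f"trial {trial}: max |error| = {np.abs(e).max():.4f}")
```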

11.
Robust online appearance models for visual tracking
We propose a framework for learning robust, adaptive appearance models to be used for motion-based tracking of natural objects. The model adapts to slowly changing appearance, and it maintains a natural measure of the stability of the observed image structure during tracking. By identifying stable properties of appearance, we can weight them more heavily for motion estimation, while less stable properties can be proportionately downweighted. The appearance model involves a mixture of stable image structure, learned over long time courses, along with two-frame motion information and an outlier process. An online EM algorithm is used to adapt the appearance model parameters over time. An implementation of this approach is developed for an appearance model based on the filter responses from a steerable pyramid. This model is used in a motion-based tracking algorithm to provide robustness in the face of image outliers, such as those caused by occlusions, while adapting to natural changes in appearance such as those due to facial expressions or variations in 3D pose.
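A much-reduced sketch of an online EM update for a single scalar appearance observation: a "stable" Gaussian plus a uniform outlier process, adapted with exponential forgetting. The initial values, forgetting factor, and outlier range are assumptions, and the full model described above additionally includes a two-frame motion component that is not shown here.

```python
import numpy as np

class OnlineAppearanceComponent:
    """Online EM for one scalar appearance observation (e.g. a steerable-filter
    response): a stable Gaussian plus a uniform outlier process, adapted with
    exponential forgetting. A sketch of the mixture idea only."""

    def __init__(self, init, forgetting=0.05, outlier_range=2.0):
        self.mu, self.var = init, 0.1
        self.w_stable = 0.7
        self.alpha = forgetting
        self.p_outlier = 1.0 / outlier_range      # uniform outlier density

    def update(self, d):
        # E-step: responsibility of the stable component for this observation
        g = np.exp(-0.5 * (d - self.mu) ** 2 / self.var) / np.sqrt(2 * np.pi * self.var)
        r = self.w_stable * g / (self.w_stable * g + (1 - self.w_stable) * self.p_outlier)
        # M-step with exponential forgetting (online EM)
        a, a_eff = self.alpha, self.alpha * r
        resid2 = (d - self.mu) ** 2
        self.w_stable = (1 - a) * self.w_stable + a * r
        self.mu = (1 - a_eff) * self.mu + a_eff * d
        self.var = max((1 - a_eff) * self.var + a_eff * resid2, 1e-4)
        return self.w_stable          # high weight = stable, trustworthy for motion

comp = OnlineAppearanceComponent(init=0.5)
for obs in [0.52, 0.49, 0.51, 3.0, 0.50]:     # the 3.0 acts like an occlusion outlier
    print(round(comp.update(obs), 3))
```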

12.
Visual tracking is a common procedure in many real-time applications. Such systems are required to track objects under changes in illumination, dynamic viewing angle, image noise, and occlusions (to name a few). To maintain real-time performance despite these challenging conditions, tracking methods must require extremely low computational resources and therefore face a trade-off between robustness and speed. The emergence of new consumer-level cameras capable of capturing video at 60 fps challenges this trade-off even further. Unfortunately, state-of-the-art tracking techniques struggle to exceed 30 VGA-resolution fps with standard desktop power, let alone on typically weaker mobile devices. In this paper we suggest a significantly cheaper computational method for tracking in colour video clips that greatly improves tracking performance in terms of the robustness/speed trade-off. The suggested approach employs a novel similarity measure that explicitly combines appearance with object kinematics, and a new adaptive Kalman filter extends the basic tracker to provide robustness to occlusions and noise. The linear time complexity of this method is reflected in its computational efficiency and high processing rate. Comparisons with two recent trackers show superior tracking robustness at more than 5 times faster operation, all using a naïve C/C++ implementation and built-in OpenCV functions.
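A hedged sketch of what a similarity measure combining appearance with kinematics can look like: a Bhattacharyya coefficient between colour histograms multiplied by a Gaussian weight on the Mahalanobis distance to the predicted position. The specific terms and values are illustrative, not the paper's measure.

```python
import numpy as np

def combined_similarity(hist_cand, hist_ref, pos_cand, pos_pred, pred_cov):
    """Candidate score = appearance term x kinematic term (illustrative only)."""
    # appearance: Bhattacharyya coefficient between normalised colour histograms
    appearance = np.sum(np.sqrt(hist_cand * hist_ref))
    # kinematics: Gaussian weight on the Mahalanobis distance to the predicted position
    diff = np.asarray(pos_cand, float) - np.asarray(pos_pred, float)
    m2 = diff @ np.linalg.inv(pred_cov) @ diff
    kinematic = np.exp(-0.5 * m2)
    return appearance * kinematic

h_ref = np.full(16, 1 / 16)                 # reference colour histogram (normalised)
h_cand = np.full(16, 1 / 16)                # candidate histogram from the current frame
print(combined_similarity(h_cand, h_ref, [105, 52], [100, 50], 25 * np.eye(2)))
```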

13.
Automatically observing and understanding human activities is one of the big challenges in computer vision research. Among the potential fields of application are areas such as robotics, human computer interaction or medical research. In this article we present our work on unintrusive observation and interpretation of human activities for the precise recognition of human fullbody motions. The presented system requires no more than three cameras and is capable of tracking a large spectrum of motions in a wide variety of scenarios. This includes scenarios where the subject is partially occluded, where it manipulates objects as part of its activities, or where it interacts with the environment or other humans. Our system is self-training, i.e. it is capable of learning models of human motion over time. These are used both to improve the prediction of human dynamics and to provide the basis for the recognition and interpretation of observed activities. The accuracy and robustness obtained by our system are the combined result of several contributions. By taking an anthropometric human model and optimizing it towards use in a probabilistic tracking framework we obtain a detailed biomechanical representation of human shape, posture and motion. Furthermore, we introduce a sophisticated hierarchical sampling strategy for tracking that is embedded in a probabilistic framework and outperforms state-of-the-art Bayesian methods. We then show how to track complex manipulation activities in everyday environments using a combination of learned human appearance models and implicit environment models. Finally, we discuss a locally consistent representation of human motion that we use as a basis for learning environment- and task-specific motion models. All methods presented in this article have been subject to extensive experimental evaluation on today's benchmarks and several challenging sequences ranging from athletic exercises to ergonomic case studies to everyday manipulation tasks in a kitchen environment.

14.
A multi-object tracking method incorporating SPA-based occlusion segmentation
Multi-object video tracking in complex environments is a difficult problem in computer vision, and handling occlusions between targets effectively is the key to solving it. Introducing motion segmentation into object tracking, this paper proposes a multi-object tracking method that incorporates skeleton point assignment (SPA) for occlusion segmentation. Skeleton points are obtained from low-level optical flow and their occlusion states are estimated; high-level semantic cues such as target appearance, motion, and colour are then used to assign the skeleton points to individual targets; finally, with the skeleton points as kernels, the moving foreground is densely classified to obtain accurate per-target foreground pixels. Multi-object tracking is then performed with a probabilistic appearance model within a particle filter framework. Experimental results on the PETS2009 dataset show that the method alleviates the poor adaptability of existing multi-object trackers to interactions between targets and handles dynamic occlusions better.

15.
高庆吉  霍璐  牛国臣 《计算机应用》2016,36(8):2311-2315
To address the failure of monocular tracking of multiple similar targets under occlusion and other disturbances, a multi-object tracking algorithm based on an improved Hough forest framework is proposed. After casting multi-object tracking as a detection-based trajectory association process, the association is turned into a maximum a posteriori (MAP) problem by introducing an online-learning Hough forest. Multi-target samples are collected online and appearance and motion features are extracted to build the Hough forest; training the forest yields trajectory association probabilities, which are used to link the trajectories of multiple targets. A low-rank approximation of the Hankel matrix is introduced to verify trajectories, repairing mismatched tracks and improving the effectiveness of the online training-sample update. Experiments show that the trajectory mismatch rate is significantly reduced and that the accuracy and robustness of monocular tracking of multiple similar, partially occluded targets are effectively improved.

16.
In this paper, we present a robust 3D human-head tracking method. 3D head positions are essential for robots interacting with people: natural interaction behaviors such as making eye contact require head positions. Past research with laser range finders (LRFs) has been successful in tracking 2D human position with high accuracy in real time. However, LRF trackers cannot track multiple 3D head positions. On the other hand, trackers with multi-viewpoint images can obtain 3D head positions. However, vision-based trackers generally lack robustness and scalability, especially in open environments where lighting conditions vary over time. To achieve robust real-time 3D tracking, we propose a new method that combines an LRF tracker and a multi-camera tracker. We combine the results of the trackers, using the LRF results as maintenance information for the multi-camera tracker. Through an experiment in a real environment, we show that our method outperforms existing methods in both robustness and scalability.

17.
Tracking pedestrians is a vital component of many computer vision applications, including surveillance, scene understanding, and behavior analysis. Videos of crowded scenes present significant challenges to tracking due to the large number of pedestrians and the frequent partial occlusions that they produce. The movement of each pedestrian, however, contributes to the overall crowd motion (i.e., the collective motions of the scene's constituents over the entire video) that exhibits an underlying spatially and temporally varying structured pattern. In this paper, we present a novel Bayesian framework for tracking pedestrians in videos of crowded scenes using a space-time model of the crowd motion. We represent the crowd motion with a collection of hidden Markov models trained on local spatio-temporal motion patterns, i.e., the motion patterns exhibited by pedestrians as they move through local space-time regions of the video. Using this unique representation, we predict the next local spatio-temporal motion pattern a tracked pedestrian will exhibit based on the observed frames of the video. We then use this prediction as a prior for tracking the movement of an individual in videos of extremely crowded scenes. We show that our approach of leveraging the crowd motion enables tracking in videos of complex scenes that present unique difficulty to other approaches.
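A simplified stand-in for predicting the next local motion pattern as a tracking prior: a count-based first-order Markov chain over quantised patterns. The paper uses hidden Markov models over local spatio-temporal motion patterns; the pattern vocabulary and training sequences below are toy assumptions.

```python
import numpy as np

N_PATTERNS = 8        # quantised local motion patterns (e.g. dominant flow directions)

def learn_transitions(pattern_sequences):
    """Count-based first-order transition model over quantised motion patterns
    (a simplified stand-in for the paper's hidden Markov models)."""
    counts = np.ones((N_PATTERNS, N_PATTERNS))     # Laplace smoothing
    for seq in pattern_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def motion_prior(transition, current_pattern):
    """Distribution over the next local pattern, usable as a tracking prior."""
    return transition[current_pattern]

training = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 1, 2, 2, 3]]   # toy training sequences
T = learn_transitions(training)
print(motion_prior(T, current_pattern=1))    # favours the patterns seen to follow pattern 1
```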

18.
《Advanced Robotics》2013,27(4):405-428
Robots designed to interact socially with people require reliable estimates of human position and motion. Additional pose data such as body orientation may enable a robot to interact more effectively by providing a basis for inferring contextual social information such as people's intentions and relationships. To this end, we have developed a system for simultaneously tracking the position and body orientation of many people, using a network of laser range finders mounted at torso height. An individual particle filter is used to track the position and velocity of each human, and a parametric shape model representing the person's cross-sectional contour is fit to the observed data at each step. We demonstrate the system's tracking accuracy quantitatively in laboratory trials and we present results from a field experiment observing subjects walking through the lobby of a building. The results show that our method can closely track torso and arm movements, even with noisy and incomplete sensor data, and we present examples of social information observable from this orientation and positioning information that may be useful for social robots.

19.
Visual analysis of human motion is currently one of the most active research topics in computer vision. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interface, etc. Human motion analysis concerns the detection, tracking and recognition of people, and more generally, the understanding of human behaviors, from image sequences involving humans. This paper provides a comprehensive survey of research on computer-vision-based human motion analysis. The emphasis is on three major issues involved in a general human motion analysis system, namely human detection, tracking and activity understanding. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are discussed.

20.
Human motion tracking based on an improved particle filter algorithm
Tracking the human body effectively and robustly in complex environments is a very challenging topic in computer vision. This paper presents human motion tracking based on an improved particle filter algorithm. Using the improved particle filter to track human motion in video sequences not only overcomes the heavy computation and frequent errors of the traditional particle filter, but also handles occlusion and self-occlusion better. Experimental results show that the improved algorithm tracks the moving human body more accurately and effectively.
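For reference, a standard bootstrap particle filter (predict, weight, resample) on 2D position; this is the baseline that improved variants such as the one above build on, with a placeholder likelihood in place of real image features.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles

def likelihood(positions, observation, sigma=5.0):
    """Placeholder image likelihood: Gaussian in distance to an observed position.
    A real tracker would score colour / edge features of the person here."""
    d2 = np.sum((positions - observation) ** 2, axis=1)
    return np.exp(-0.5 * d2 / sigma ** 2)

particles = rng.normal([100.0, 100.0], 10.0, (N, 2))   # initial guess of the person

def pf_step(particles, observation, motion_std=3.0):
    particles = particles + rng.normal(0, motion_std, particles.shape)   # predict
    w = likelihood(particles, observation)
    w /= w.sum()                                                         # weight
    idx = rng.choice(N, size=N, p=w)                                     # resample
    return particles[idx], particles[idx].mean(axis=0)

for obs in np.array([[102, 101], [105, 103], [108, 106]], float):
    particles, estimate = pf_step(particles, obs)
    print(estimate.round(1))
```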
