Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
齐骥  王宇鹏  钟志 《计算机测量与控制》2016,24(6):189-191, 194
To address the cooperative control problem for multiple unmanned aerial vehicles (UAVs), a distributed task-planning method based on multi-stage UAV trajectory prediction is proposed. A task cycle is defined as the interval from one task assignment to the completion of one of the assigned tasks. In each planning cycle, every UAV first uses the A* algorithm to quickly predict paths to all task targets and supplies them to the task-allocation stage; a clustering algorithm then modifies the target value vectors, the allocation results are negotiated, and the shortest path within the detection range is computed in real time; finally, cubic B-spline curves smooth the assigned shortest path so that a flyable trajectory satisfying the flight constraints is planned online. Simulation experiments verify the effectiveness of the algorithm: it obtains near-optimal task-allocation results in real time, plans flyable trajectories, and handles pop-up tasks effectively.
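The final smoothing step described above, a cubic B-spline fit over the assigned shortest path, can be illustrated with a minimal sketch; the waypoints, the smoothing factor, and the use of scipy are assumptions for illustration and are not taken from the paper:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical A*-style waypoints on a 2D grid (not from the paper).
waypoints = np.array([[0, 0], [2, 1], [3, 3], [5, 4], [8, 4], [10, 6]], dtype=float)

# Fit a cubic (k=3) B-spline through the waypoints; s > 0 trades fidelity for smoothness.
tck, _ = splprep(waypoints.T, k=3, s=1.0)

# Sample the smoothed flight path densely for the flight controller.
u = np.linspace(0.0, 1.0, 200)
x, y = splev(u, tck)
smooth_path = np.column_stack([x, y])
print(smooth_path[:3])
```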

2.
This work studies the task chain in the communication protocol between the ground mission control station and the air vehicles of a UAV system. By function, the task chain is divided into four parts: the route plan, the sensor plan, the communication plan, and the control-authority handover plan, and a task-chain model capable of supporting complex missions is established. The task chain is expressed in XML, which improves its standardization and generality. Finally, a simulation is implemented in which the mission control station uses the task chain to control multiple UAVs executing a mission cooperatively.
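A minimal sketch of how a four-part task chain of this kind could be serialized with Python's standard xml.etree.ElementTree; the element and attribute names (RoutePlan, SensorPlan, CommPlan, HandoverPlan, and so on) are hypothetical and not the protocol's actual schema:

```python
import xml.etree.ElementTree as ET

# Illustrative task-chain layout: route plan, sensor plan, comm plan, handover plan.
chain = ET.Element("TaskChain", id="UAV-07")

route = ET.SubElement(chain, "RoutePlan")
ET.SubElement(route, "Waypoint", lat="39.90", lon="116.40", alt="1200")
ET.SubElement(route, "Waypoint", lat="39.95", lon="116.45", alt="1200")

sensor = ET.SubElement(chain, "SensorPlan")
ET.SubElement(sensor, "Task", sensor="EO", mode="track", target="T-3")

ET.SubElement(chain, "CommPlan", relay="GCS-1", band="C")
ET.SubElement(chain, "HandoverPlan", fromStation="GCS-1", toStation="GCS-2")

print(ET.tostring(chain, encoding="unicode"))
```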

3.
An iterative temporal registration algorithm is presented in this article for registering 3D range images obtained from unmanned ground and aerial vehicles traversing unstructured environments. We are primarily motivated by the development of 3D registration algorithms to overcome both the unavailability and unreliability of Global Positioning System (GPS) within required accuracy bounds for Unmanned Ground Vehicle (UGV) navigation. After suitable modifications to the well-known Iterative Closest Point (ICP) algorithm, the modified algorithm is shown to be robust to outliers and false matches during the registration of successive range images obtained from a scanning LAser Detection And Ranging (LADAR) rangefinder on the UGV. Towards registering LADAR images from the UGV with those from an Unmanned Aerial Vehicle (UAV) that flies over the terrain being traversed, we then propose a hybrid registration approach. In this approach to air-to-ground registration to estimate and update the position of the UGV, we register range data from two LADARs by combining a feature-based method with the aforementioned modified ICP algorithm. Registration of range data guarantees an estimate of the vehicle's position even when only one of the vehicles has GPS information. Temporal range registration enables position information to be continually maintained even when both vehicles can no longer maintain GPS contact. We present results of the registration algorithm in rugged terrain and urban environments using real field data acquired from two different LADARs on the UGV. ★Commercial equipment and materials are identified in this article in order to adequately specify certain procedures. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.
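The baseline the authors modify is the standard point-to-point ICP step: match points, reject distant pairs, and solve the rigid transform by SVD. A minimal sketch under those textbook assumptions (the paper's specific robustification and its feature-based air-to-ground stage are not reproduced here):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst, max_dist=1.0):
    """One ICP iteration: match, reject far pairs, solve the rigid transform by SVD."""
    tree = cKDTree(dst)
    d, idx = tree.query(src)
    keep = d < max_dist                      # crude outlier / false-match gate
    p, q = src[keep], dst[idx[keep]]
    mu_p, mu_q = p.mean(0), q.mean(0)
    H = (p - mu_p).T @ (q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Toy example: two synthetic "scans" related by a small planar rotation plus noise.
rng = np.random.default_rng(0)
dst = rng.uniform(-5, 5, (500, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + 0.01 * rng.normal(size=dst.shape)
R, t = icp_step(src, dst)
print(np.round(R, 3), np.round(t, 3))
```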

4.
Fielding robots in complex applications can stress the human operators responsible for supervising them, particularly because the operators might understand the applications but not the details of the robots. Our answer to this problem has been to insert agent technology between the operator and the robotic platforms. In this paper, we motivate the challenges in defining, developing, and deploying the agent technology that provides the glue in the application of tasking unmanned ground vehicles in a military setting. We describe how a particular suite of architectural components serves equally well to support the interactions between the operator, planning agents, and robotic agents, and how our approach allows autonomy during planning and execution of a mission to be allocated among the human and artificial agents. Our implementation and demonstrations (in real robots and simulations) of our multi-agent system show significant promise for the integration of unmanned vehicles in military applications.

5.
For environment perception on an unmanned ground vehicle, multi-sensor information is fused on the basis of Dempster-Shafer (D-S) evidence theory to tackle the difficult problem of obstacle identification. An information fusion system is built around a CCD camera and a laser sensor, and five feature evidences are extracted for each type of obstacle: distance contrast, parallelogram shape, edge shape, gray-level texture, and color. Empirical formulas are then selected according to the target type and environment weighting coefficients, and fuzzy interpolation is used to compute identity membership degrees that approximate each feature's correlation coefficient with the target, from which the basic probability assignment functions are constructed. Finally, Dempster's combination rule fuses the multi-sensor feature information to identify the obstacle. Experiments show that the method obtains basic probability assignment functions accurately and efficiently, and that D-S evidence fusion improves the accuracy and robustness of obstacle identification.
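The fusion step relies on Dempster's rule of combination, which renormalizes the product of two basic probability assignments by the non-conflicting mass. A small sketch with made-up masses over a two-hypothesis frame (the paper's actual feature evidences and weighting are not modeled):

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (keys are frozensets of hypotheses)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment: {pedestrian, vehicle}; the masses are illustrative only.
P, V = frozenset({"pedestrian"}), frozenset({"vehicle"})
theta = P | V
m_edge  = {V: 0.6, P: 0.1, theta: 0.3}   # evidence from an edge-shape feature
m_color = {V: 0.5, P: 0.2, theta: 0.3}   # evidence from a color feature
print(dempster_combine(m_edge, m_color))
```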

6.
To address the lack of attitude constraints on system members in the joint motion of a quadrotor UAV and an unmanned ground vehicle, a distributed joint motion control method based on model predictive control (MPC) is studied. Building on the virtual structure method with a virtual-leader strategy, the virtual leader provides the reference trajectory and reference velocity, which each unmanned platform converts into the prediction-horizon information it needs; predictive control is then realized by receding-horizon optimization with the derived state-space model of each platform. The quadrotor's vertical motion state and yaw angle are constrained, and a state-space model is constructed that takes the products of the pitch and roll angles with gravitational acceleration as the inputs to the position dynamics, so that the UAV's inner-loop attitude control constraints are incorporated into the position motion and flight stability is enhanced. The ground vehicle's state-space model is refined by adding velocity information, yielding an augmented state-space model that supports position and velocity tracking and strengthens motion-tracking ability. Simulations show that, while the platforms' attitude constraints are satisfied, the position and velocity accuracy of the joint motion is guaranteed.
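The receding-horizon idea can be illustrated with an unconstrained double-integrator sketch solved by batch least squares; the horizon length, weights, and dynamics below are illustrative stand-ins, not the paper's quadrotor or vehicle models with attitude constraints:

```python
import numpy as np

dt, N, lam = 0.1, 20, 0.05          # step, horizon, input penalty (illustrative values)
A = np.array([[1, dt], [0, 1]])     # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])

# Stack the horizon: X = Phi x0 + Gamma U (batch prediction matrices).
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Gamma = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        Gamma[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

def mpc_step(x0, x_ref):
    """Solve the unconstrained horizon and return only the first control input."""
    err = np.tile(x_ref, N) - Phi @ x0
    U = np.linalg.solve(Gamma.T @ Gamma + lam * np.eye(N), Gamma.T @ err)
    return U[0]

x = np.array([0.0, 0.0])
for _ in range(50):                         # track a leader-provided reference point
    u = mpc_step(x, np.array([5.0, 0.0]))
    x = A @ x + (B * u).ravel()
print(np.round(x, 3))                       # state moves toward [5, 0]
```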

7.
Cloud robotics is the application of cloud computing concepts to robotic systems. It utilizes modern cloud computing infrastructure to distribute computing resources and datasets. A cloud‐based real‐time outsourcing localization architecture is proposed in this paper to allow a ground mobile robot to identify its location relative to a road network map and reference images in the cloud. An update of the road network map is executed in the cloud, as is the extraction of the robot‐terrain inclination (RTI) model as well as reference image matching. A particle filter with a network‐delay‐compensation localization algorithm is executed on the mobile robot based on the local RTI model and the recognized location, both of which are sent from the cloud. The proposed methods are tested in different challenging outdoor scenarios with a ground mobile robot equipped with minimal onboard hardware, where the longest trajectory was 13.1 km. Experimental results show that this method could be applicable to large‐scale outdoor environments for autonomous robots in real time.
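A bare-bones version of an on-robot particle filter, reduced to a 1D state for clarity; the noise levels are invented, and the network-delay compensation and RTI matching described in the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
particles = rng.normal(0.0, 1.0, n)        # initial belief over a 1D position
weights = np.full(n, 1.0 / n)

def pf_step(particles, weights, control, measurement, motion_std=0.2, meas_std=0.5):
    """Predict with the motion model, weight by the measurement likelihood, resample."""
    particles = particles + control + rng.normal(0.0, motion_std, particles.size)
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

true_pos = 0.0
for _ in range(30):
    true_pos += 0.5                                   # robot moves 0.5 m per step
    z = true_pos + rng.normal(0.0, 0.5)               # noisy fix (e.g. sent from the cloud)
    particles, weights = pf_step(particles, weights, 0.5, z)
print(round(float(particles.mean()), 2), round(true_pos, 2))
```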

8.
Horse locomotion exhibits rich variations in gaits and styles. Although there have been many approaches proposed for animating quadrupeds, there is not much research on synthesizing horse locomotion. In this paper, we present a horse locomotion synthesis approach. A user can arbitrarily change a horse's moving speed and direction, and our system automatically adjusts the horse's motion to fulfill the user's commands. At preprocessing, we manually capture horse locomotion data from Eadweard Muybridge's famous photographs of animal locomotion and expand the captured motion database to various speeds for each gait. At runtime, our approach automatically changes gaits based on speed, synthesizes the horse's root trajectory, and adjusts its body orientation based on the horse's turning direction. We propose an asynchronous time warping approach to handle gait transition, which is critical for generating realistic and controllable horse locomotion. Our experiments demonstrate that our system can produce smooth, rich, and controllable horse locomotion in real time. Copyright © 2012 John Wiley & Sons, Ltd.

9.
We present a new real‐time approach to simulate deformable objects using a learnt statistical model to achieve a high degree of realism. Our approach improves upon state‐of‐the‐art interactive shape‐matching meshless simulation methods by not only capturing important nuances of an object's kinematics but also of its dynamic texture variation. We are able to achieve this in an automated pipeline from data capture to simulation. Our system allows for the capture of idiosyncratic characteristics of an object's dynamics which for many simulations (e.g. facial animation) is essential. We allow for the plausible simulation of mechanically complex objects without knowledge of their inner workings. The main idea of our approach is to use a flexible statistical model to achieve a geometrically‐driven simulation that allows for arbitrarily complex yet easily learned deformations while at the same time preserving the desirable properties (stability, speed and memory efficiency) of current shape‐matching simulation systems. The principal advantage of our approach is the ease with which a pseudo‐mechanical model can be learned from 3D scanner data to yield realistic animation. We present examples of non‐trivial biomechanical objects simulated on a desktop machine in real‐time, demonstrating superior realism over current geometrically motivated simulation techniques.

10.
Virtual characters in games and simulations often need to plan visually convincing paths through a crowded environment. This paper describes how crowd density information can be used to guide a large number of characters through a crowded environment. Crowd density information helps characters avoid congested routes that could lead to traffic jams. It also encourages characters to use a wide variety of routes to reach their destination. Our technique measures the desirability of a route by combining distance information with crowd density information. We start by building a navigation mesh for the walkable regions in a polygonal two‐dimensional (2‐D) or multilayered three‐dimensional (3‐D) environment. The skeleton of this navigation mesh is the medial axis. Each walkable region in the navigation mesh maintains an up‐to‐date density value. This density value is equal to the area occupied by all the characters inside a given region divided by the total area of this region. These density values are mapped onto the medial axis to form a weighted graph. An A* search on this graph yields a backbone path for each character, and forces are used to guide the characters through the weighted environment. The characters periodically replan their routes as the density values are updated. Our experiments show that we can compute congestion‐avoiding paths for tens of thousands of characters in real‐time. Copyright © 2012 John Wiley & Sons, Ltd.
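The route cost described here, distance modulated by an up-to-date region density, can be sketched as a weighted A* over a small graph; the toy graph, the density values, and the weighting factor alpha are illustrative only:

```python
import heapq

def astar(graph, density, start, goal, h, alpha=2.0):
    """A* where each edge cost is length * (1 + alpha * density of the region it crosses)."""
    frontier, best = [(h(start), 0.0, start, [start])], {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, length, region in graph.get(node, []):
            cost = g + length * (1.0 + alpha * density[region])
            if cost < best.get(nxt, float("inf")):
                best[nxt] = cost
                heapq.heappush(frontier, (cost + h(nxt), cost, nxt, path + [nxt]))
    return None, float("inf")

# Toy medial-axis graph: (neighbor, edge length, walkable region the edge crosses).
graph = {"A": [("B", 4, "r1"), ("C", 6, "r2")],
         "B": [("D", 5, "r1")],
         "C": [("D", 3, "r2")],
         "D": []}
density = {"r1": 0.8, "r2": 0.1}           # r1 is congested, r2 is nearly empty
path, cost = astar(graph, density, "A", "D", h=lambda n: 0.0)
print(path, round(cost, 2))                 # detours through C because r1 is crowded
```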

11.
Autonomous robots require accurate localization and dense mapping for motion planning. We consider the navigation scenario where the dense representation of the robot's surroundings must be immediately available, and require that the system be capable of an instantaneous map correction if a loop closure is detected by the localization module. To satisfy the real‐time processing requirement of online robotics applications, our presented system bounds the algorithmic complexity of the localization pipeline by restricting the number of variables to be optimized at each time instant. A dense map representation along with a local dense map reconstruction strategy is also proposed. Despite the limits that are imposed by the real‐time requirement and planning safety, the mapping quality of our method is comparable to other competitive methods. For the implementation, we additionally introduce a few engineering considerations, such as the system architecture, the variable initialization, the memory management, the image processing, and so forth, to improve the system performance. Extensive experimental validations of our presented system are performed on the KITTI and NewCollege datasets, and through an online experiment around the Hong Kong University of Science and Technology (HKUST) campus. We release our implementation as open‐source robot operating system (ROS) packages for the benefit of the community.

12.
This paper describes a novel real‐time multi‐spectral imaging capability for surveillance applications. The capability combines a new high‐performance multi‐spectral camera system with a distributed algorithm that computes a spectral‐screening principal component transform (PCT). The camera system uses a novel filter wheel design together with a high‐bandwidth CCD camera to allow image cubes to be delivered at 110 frames per second with a spectral coverage between 400 and 1000 nm. The filters used in a particular application are selected to highlight a particular object based on its spectral signature. The distributed algorithm allows image streams from a dispersed collection of cameras to be disseminated, viewed, and interpreted by a distributed group of analysts in real‐time. It operates on networks of commercial‐off‐the‐shelf multiprocessors connected with high‐performance (e.g. gigabit) networking, taking advantage of multi‐threading where appropriate. The algorithm uses a concurrent formulation of the PCT to de‐correlate and compress a multi‐spectral image cube. Spectral screening is used to give features that occur infrequently (e.g. mechanized vehicles in a forest) equal importance to those that occur frequently (e.g. trees in the forest). A human‐centered color‐mapping scheme is used to maximize the impact of spectral contrast on the human visual system. To demonstrate the efficacy of the multi‐spectral system, plant‐life scenes with both real and artificial foliage are used. These scenes demonstrate the system's ability to distinguish elements of a scene that cannot be distinguished with the naked eye. The capability is evaluated in terms of visual performance, scalability, and real‐time throughput. Our previous work on predictive analytical modeling is extended to answer practical design questions such as 'For a specified cost, what system can be constructed and what performance will it attain?' Copyright © 2001 John Wiley & Sons, Ltd.
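The principal component transform at the core of this pipeline amounts to diagonalizing the band covariance of the image cube and projecting each pixel's band vector; a compact numpy sketch on synthetic data (the spectral screening and the distributed, multi-threaded execution are not modeled):

```python
import numpy as np

# Synthetic multi-spectral cube: height x width x bands (real cubes come from the filter wheel).
rng = np.random.default_rng(0)
cube = rng.normal(size=(128, 128, 8))
pixels = cube.reshape(-1, cube.shape[-1])            # one band vector per pixel

mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)            # band covariance matrix
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]                     # sort components by variance
pct = (pixels - mean) @ eigvec[:, order]             # decorrelated principal components

# Keep the top three components and map them to an RGB-like display image.
rgb = pct[:, :3].reshape(cube.shape[0], cube.shape[1], 3)
rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())    # naive normalization for display
print(rgb.shape)
```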

13.
We present a real‐time approach for acquiring 3D objects with high fidelity using hand‐held consumer‐level RGB‐D scanning devices. Existing real‐time reconstruction methods typically do not take the point of interest into account, and thus might fail to produce clean reconstruction results of desired objects due to distracting objects or backgrounds. In addition, any changes in background during scanning, which can often occur in real scenarios, can easily break up the whole reconstruction process. To address these issues, we incorporate visual saliency into a traditional real‐time volumetric fusion pipeline. Salient regions detected from RGB‐D frames suggest user‐intended objects, and by understanding user intentions our approach can put more emphasis on important targets, and meanwhile, eliminate disturbance of non‐important objects. Experimental results on real‐world scans demonstrate that our system is capable of effectively acquiring geometric information of salient objects in cluttered real‐world scenes, even if the backgrounds are changing.

14.
Existing real‐time volume rendering techniques which support global illumination are limited in modeling distinct realistic appearances for classified volume data, which is a desired capability in many fields of study for illustration and education. Directly extending the emission‐absorption volume integral with heterogeneous material shading becomes unaffordable for real‐time applications because the high‐frequency view‐dependent global lighting needs to be evaluated per sample along the volume integral. In this paper, we present a decoupled shading algorithm for multi‐material volume rendering that separates global incident lighting evaluation from per‐sample material shading under multiple light sources. We show how the incident lighting calculation can be optimized through a sparse volume integration method. The quality, performance and usefulness of our new multi‐material volume rendering method is demonstrated through several examples.

15.
This paper focuses on the stable and efficient simulation of articulated rigid body systems for real‐time applications. Specifically, we focus on the use of geometric stiffness which can dramatically increase simulation stability. We examine several numerical problems with the inclusion of geometric stiffness in the equations of motion, as proposed by previous work, and address these issues by introducing a novel method for efficiently building the linear system. This offers improved tractability and numerical efficiency. Furthermore, geometric stiffness tends to significantly dissipate kinetic energy. We propose an adaptive damping scheme, inspired by the geometric stiffness, that uses a stability criterion based on the numerical integrator to determine the amount of non‐constitutive damping required to stabilize the simulation. With this approach, not only is the dynamical behavior better preserved, but the simulation remains stable for mass ratios of 1,000,000‐to‐1 at time steps up to 0.1 s. We present a number of challenging scenarios to demonstrate that our method improves efficiency, and that it increases stability by orders of magnitude compared to previous work.

16.
Unmanned ground vehicles (UGVs) can carry out civilian and military tasks autonomously in place of humans, which is of major strategic significance for the future of intelligent transportation and army equipment. As artificial intelligence technology matures, reinforcement learning has become one of the most closely watched trends in intelligent decision-making for UGVs. This paper first briefly reviews the development history, basic principles, and core algorithms of reinforcement learning; it then analyzes and summarizes research progress on reinforcement learning for UGV decision-making in four typical scenarios: obstacle avoidance, lane changing and overtaking, lane keeping, and intersection passing; finally, in view of the problems and challenges facing reinforcement-learning-based decision-making, it discusses and looks ahead to future work and potential research directions.

17.
An augmented reality book (AR book) is an application in which multimedia elements such as virtual 3D objects, movie clips, or sound clips are augmented to a conventional book using augmented reality technology. It can give users a better understanding of the contents and stronger visual impressions. For AR books, this paper presents a markerless tracking method, which recognizes and tracks a large number of pages in real‐time, even on PCs with low computation power. For fast recognition over a large number of pages, we propose a generic randomized forest that is an extension of a randomized forest. In addition, we define the spatial locality of the subregions in an image to resolve the problem of a dropping recognition rate under complex backgrounds. For tracking with minimal jittering, we also propose an adaptive keyframe‐based tracking method, which automatically updates the current frame as a keyframe when it describes the page better than the existing one. Copyright © 2011 John Wiley & Sons, Ltd.

18.
The exponential growth of the Internet, combined with the increasing popularity of streaming audio and video, is pushing Internet bandwidth constraints to their limits. Methods of managing and more efficiently utilizing existing bandwidth are becoming increasingly vital. Using IP multicast to deliver content, especially streaming audio and video, can provide enormous bandwidth savings. A decade of effort at deploying multicast, combined with the rising need for better traffic management for bandwidth‐hungry applications, has led to significant momentum for multicast use and deployment. One of the remaining barriers to widespread adoption is the lack of multicast monitoring and debugging tools. To address this need we introduce MHealth, a graphical, near real‐time multicast monitoring tool. MHealth utilizes existing tools to collect comprehensive data about Real-time Transport Protocol (RTP) based streaming audio/video sessions. By using a combination of application‐level protocol data for participant information and a multicast route tracing tool for topology information, MHealth is able to present a multicast tree's topology and information about the quality of received data. In this paper we describe the design and implementation of MHealth and include an example analysis of multicast tree statistics. Copyright © 2000 John Wiley & Sons, Ltd.

19.
Recent advances in physically‐based simulations have made it possible to generate realistic animations. However, in the case of solid‐fluid coupling, wetting effects have rarely been noticed despite their visual importance, especially in interactions between fluids and granular materials. This paper presents a simple particle‐based method to model the physical mechanism of wetness propagating through granular materials: fluid particles are absorbed in the spaces between the granular particles, and these wetted granular particles then stick together due to liquid bridges that are caused by surface tension and which will subsequently disappear when over‐wetting occurs. Our method can handle these phenomena by introducing a wetness value for each granular particle and by integrating those aspects of behavior that are dependent on wetness into the simulation framework. Using this method, a GPU‐based simulator can achieve highly dynamic animations that include wetting effects in real time.
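The per-particle wetness bookkeeping can be sketched as absorption from nearby fluid particles followed by diffusion between neighboring grains; the interaction radius and rates below are invented, and the liquid-bridge forces themselves are not modeled:

```python
import numpy as np
from scipy.spatial import cKDTree

def update_wetness(grain_pos, wetness, fluid_pos, radius=0.1, absorb=0.3, diffuse=0.05):
    """Grains absorb wetness from nearby fluid particles, then diffuse it to grain neighbors."""
    tree = cKDTree(grain_pos)
    # Absorption: each fluid particle wets the grains within its interaction radius.
    for neighbors in tree.query_ball_point(fluid_pos, radius):
        wetness[neighbors] = np.minimum(1.0, wetness[neighbors] + absorb)
    # Diffusion: wetness spreads between neighboring grains.
    pairs = tree.query_pairs(radius, output_type="ndarray")
    if len(pairs):
        delta = diffuse * (wetness[pairs[:, 1]] - wetness[pairs[:, 0]])
        np.add.at(wetness, pairs[:, 0], delta)
        np.add.at(wetness, pairs[:, 1], -delta)
    return np.clip(wetness, 0.0, 1.0)

rng = np.random.default_rng(0)
grains = rng.uniform(0, 1, (2000, 3))
fluid = rng.uniform(0.4, 0.6, (200, 3))
w = update_wetness(grains, np.zeros(len(grains)), fluid)
print(f"wet grains: {(w > 0).mean():.0%}")
```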

20.
With ever‐increasing display resolution for wide field‐of‐view displays—such as head‐mounted displays or 8k projectors—shading has become the major computational cost in rasterization. To reduce computational effort, we propose an algorithm that only shades visible features of the image while cost‐effectively interpolating the remaining features without affecting perceived quality. In contrast to previous approaches we do not only simulate acuity falloff but also introduce a sampling scheme that incorporates multiple aspects of the human visual system: acuity, eye motion, contrast (stemming from geometry, material or lighting properties), and brightness adaptation. Our sampling scheme is incorporated into a deferred shading pipeline to shade the image's perceptually relevant fragments while a pull‐push algorithm interpolates the radiance for the rest of the image. Our approach does not impose any restrictions on the performed shading. We conduct a number of psycho‐visual experiments to validate scene‐ and task‐independence of our approach. The number of fragments that need to be shaded is reduced by 50 % to 80 %. Our algorithm scales favorably with increasing resolution and field‐of‐view, rendering it well‐suited for head‐mounted displays and wide‐field‐of‐view projection.
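The acuity-falloff part of such a sampling scheme can be sketched as a probabilistic shading mask that is dense at the gaze point and sparse in the periphery; the falloff constants are made up, and the contrast, eye-motion, and brightness-adaptation terms of the paper are omitted:

```python
import numpy as np

def shading_mask(width, height, gaze, full_res_radius=0.1, falloff=3.0, seed=0):
    """Boolean mask of fragments to shade: dense near the gaze point, sparse in the periphery."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    # Eccentricity normalized by the image diagonal (gaze given in pixels).
    ecc = np.hypot(xs - gaze[0], ys - gaze[1]) / np.hypot(width, height)
    prob = np.where(ecc < full_res_radius, 1.0, np.exp(-falloff * (ecc - full_res_radius)))
    return rng.random((height, width)) < prob

mask = shading_mask(1920, 1080, gaze=(960, 540))
print(f"shaded fragments: {mask.mean():.0%}")   # the rest would be filled by pull-push interpolation
```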
