Similar Literature
1.
To achieve autonomous flight of multiple UAVs in complex task environments, this paper adopts an improved reinforcement learning algorithm and designs an intelligent multi-UAV path planning strategy with collision and obstacle avoidance. By improving the search strategy, introducing a neural network for function approximation, and constructing a well-designed immediate reward function, the method increases the flexibility of the algorithm and reduces the computational burden on the UAVs, enabling multiple UAVs to account for random factors such as wind speed as well as static and dynamic threats in complex task environments, and to autonomously plan safe, feasible paths from their initial positions to designated target points. To explore the feasibility of the proposed algorithm in actual flight, quadrotor UAVs are used as the experimental platform, and the feasibility and effectiveness of the algorithm are verified in a ROS-based simulation environment.
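
The abstract does not give the exact form of the immediate reward, so the following is only a minimal sketch of how a shaped reward combining goal progress with collision/obstacle penalties might look; all weights, radii, and thresholds here are hypothetical, not the paper's values.

```python
import numpy as np

def immediate_reward(pos, prev_pos, goal, obstacles, other_uavs,
                     safe_radius=5.0, collision_radius=2.0, goal_radius=1.0):
    """Hypothetical shaped immediate reward for one UAV at one time step."""
    pos, prev_pos, goal = map(np.asarray, (pos, prev_pos, goal))

    # Large terminal rewards/penalties.
    if np.linalg.norm(pos - goal) < goal_radius:
        return 100.0                                   # reached the target point
    threats = list(obstacles) + list(other_uavs)
    for other in threats:
        if np.linalg.norm(pos - np.asarray(other)) < collision_radius:
            return -100.0                              # collision with a threat or teammate

    # Dense shaping: reward progress toward the goal, penalize unsafe proximity.
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    proximity_penalty = sum(
        max(0.0, safe_radius - np.linalg.norm(pos - np.asarray(o))) for o in threats
    )
    return 1.0 * progress - 0.1 * proximity_penalty
```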

2.
Applying machine learning to solve technical challenges in storage systems is currently one of the research hotspots of the storage field. Reinforcement learning is a special kind of machine learning that takes environmental feedback as input and adapts to its environment: by observing changes in the environment's state, it evaluates how control decisions affect system performance and selects the optimal control policy. Intelligent RAID control based on reinforcement learning therefore has significant research value. Targeting the characteristics of high-performance computing applications, this paper introduces reinforcement learning into the RAID controller and proposes RL-scheduler, an intelligent I/O scheduling algorithm that uses Q-learning to realize an autonomous scheduling policy for parallel applications. RL-scheduler jointly considers scheduling fairness, disk seek time, and the I/O access efficiency of MPI applications, and introduces a multi-Q-table cross-organization method to improve the efficiency of Q-table updates. Experimental results show that RL-scheduler shortens the average I/O service time of parallel applications and improves the I/O throughput of large-scale parallel computing systems.
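
The abstract names Q-learning as the underlying learning rule. A minimal, generic sketch of such an update applied to choosing the next I/O request is shown below; the state features, action set, and constants are placeholders, not RL-scheduler's actual design.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)   # Q[(state, action)] -> estimated value

def choose_request(state, pending_requests):
    """Epsilon-greedy choice of which pending I/O request to dispatch next."""
    if random.random() < EPSILON:
        return random.choice(pending_requests)
    return max(pending_requests, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_requests):
    """One Q-learning step; the reward could combine fairness, seek time, and MPI I/O efficiency."""
    best_next = max((Q[(next_state, a)] for a in next_requests), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```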

3.
ERP implementations remain problematic despite the fact that many of the issues are by now quite well known. In this paper, we take a different perspective from the critical success factors and risks approaches that are common in the information systems discipline to explain why ERP implementations fail. Specifically, we adapt Sitkin's theory of intelligent failure to ERP implementations, resulting in a theory that we call learning from failure. We then examine from the viewpoint of this theory the details of two SAP R/3 implementations, one of which failed while the other succeeded. Although it is impossible to state, unequivocally, that the implementation that failed did so because it did not use the approach that was derived from the theory, the analysis reveals that the company that followed many of the tenets of the theory succeeded while the other did not.

4.
王龙, 黄锋. 《自动化学报》 (Acta Automatica Sinica), 2023, 49(3): 580-613
In recent years, artificial intelligence (AI) has achieved many breakthroughs in research areas such as board and card games, computer vision, natural language processing, and protein structure analysis and prediction; the traditional barriers between disciplines are gradually being broken down, and deep interdisciplinary integration is becoming increasingly evident. As three important components of modern intelligence science, game theory, multi-agent learning, and control theory have, since their inception, exhibited a deeply intertwined relationship. In particular, driven by recent advances in AI, research at the intersection of the three has been growing explosively. To reflect this academic trend in a timely manner, this paper systematically reviews their similarities and differences, their connections, and the latest research progress. First, the four basic game forms that link the three fields are introduced, followed by the multi-agent learning methods corresponding to these four forms; then, the latest progress of this interdisciplinary research is reviewed by topic; finally, this emerging research area is summarized and future directions are discussed.

5.
Lundqvist, Thomas; Stenström, Per. Real-Time Systems, 1999, 17(2-3): 183-207
Previously published methods for estimation of the worst-case execution time on high-performance processors with complex pipelines and multi-level memory hierarchies result in overestimations owing to insufficient path and/or timing analysis. This not only gives rise to poor utilization of processing resources but also reduces schedulability in real-time systems. This paper presents a method that integrates path and timing analysis to accurately predict the worst-case execution time for real-time programs on high-performance processors. The unique feature of the method is that it extends cycle-level architectural simulation techniques to enable symbolic execution with unknown input data values; it uses alternative instruction semantics to handle unknown operands. We show that the method can exclude many infeasible (or non-executable) program paths and can calculate path information, such as bounds on the number of loop iterations, without the need for manual annotations of programs. Moreover, the method is shown to accurately analyze timing properties of complex features in high-performance processors using multiple-issue pipelines and instruction and data caches. The combined path and timing analysis capability is shown to derive exact estimates of the worst-case execution time for six out of seven programs in our benchmark suite.
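
The core idea the abstract describes — alternative instruction semantics that propagate "unknown" operand values during cycle-level simulation — can be illustrated with a toy sketch; the value representation and operations below are invented for illustration, not the paper's implementation.

```python
UNKNOWN = object()   # marker for a statically unknown input value

def alu_add(a, b):
    """Alternative semantics: any operation on an unknown operand yields unknown."""
    return UNKNOWN if a is UNKNOWN or b is UNKNOWN else a + b

def branch_successors(cond):
    """A branch on an unknown condition forces the analyzer to follow both paths;
    a concrete condition prunes the infeasible successor."""
    if cond is UNKNOWN:
        return (True, False)     # both successors are feasible
    return (bool(cond),)         # only the concrete successor is feasible
```

When both successors are feasible, the simulator explores each and keeps the worst-case timing, which is how infeasible paths get excluded without manual annotations.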

6.
Reinforcement Learning (RL) is learning through direct experimentation. It does not assume the existence of a teacher that provides examples upon which learning of a task takes place. Instead, in RL experience is the only teacher. With historical roots in the study of biological conditioned reflexes, RL attracts the interest of Engineers and Computer Scientists because of its theoretical relevance and potential applications in fields as diverse as Operational Research and Intelligent Robotics. Computationally, RL is intended to operate in a learning environment composed of two subjects: the learner and a dynamic process. At successive time steps, the learner makes an observation of the process state, selects an action and applies it back to the process. Its goal is to find an action policy that controls the behavior of the dynamic process, guided by signals (reinforcements) that indicate how badly or well it has been performing the required task. These signals are usually associated with a dramatic condition – e.g., accomplishment of a subtask (reward) or complete failure (punishment) – and the learner tries to optimize its behavior by using a performance measure (a function of the received reinforcements). The crucial point is that, in order to do that, the learner must evaluate the conditions (associations between observed states and chosen actions) that led to rewards or punishments. Starting from basic concepts, this tutorial presents the many flavors of RL algorithms, develops the corresponding mathematical tools, assesses their practical limitations and discusses alternatives that have been proposed for applying RL to realistic tasks.
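
The learner/process interaction cycle described in the abstract can be summarized in a few lines; the interface below is a generic observe/act/reinforce loop, not any particular library's API.

```python
def run_episode(process, learner, max_steps=1000):
    """Generic RL interaction: observe the process state, act, receive a reinforcement, learn."""
    state = process.observe()
    for _ in range(max_steps):
        action = learner.select_action(state)                      # current action policy
        next_state, reinforcement, done = process.step(action)     # apply action to the process
        learner.update(state, action, reinforcement, next_state)   # credit assignment
        state = next_state
        if done:                                                   # subtask accomplished or failure
            break
```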

7.
Parallel Enterprise Resource Planning Based on Deep Reinforcement Learning
秦蕊, 曾帅, 李娟娟, 袁勇. 《自动化学报》 (Acta Automatica Sinica), 2017, 43(9): 1588-1596
Traditional enterprise resource planning (ERP) follows a static business-process design philosophy, neglects the key role of people, and rarely involves systematic process models, so it struggles to meet the complexity requirements of modern enterprise resource planning. To realize a new paradigm for modern ERP, this paper builds an enterprise ERP system based on parallel management under the ACP framework (Artificial societies, Computational experiments, Parallel execution), driven by big data and integrated with deep reinforcement learning. First, an overall ERP modeling framework is built from multiple agents; then, a sequential game model is established for the entire ERP process; finally, a neural network trained with deep reinforcement learning is used to find the optimal strategy, addressing the uncertainty, diversity, and complexity faced by complex enterprise ERP.

8.
With the rapid development of the mobile Internet, many ride-hailing platforms that let passengers hail cars through smartphone apps have emerged. These platforms greatly reduce the idle driving time of vehicles and the waiting time of passengers, thereby improving traffic efficiency. As a core module of such platforms, the ride-hailing route planning problem aims to dispatch idle vehicles to serve potential passengers and thus improve platform operating efficiency, and it has received wide attention in recent years. Existing studies mainly adopt value-function-based deep reinforcement learning algorithms (such as the deep Q-network, DQN) to solve this problem. However, value-function-based methods are limited and cannot be applied to high-dimensional, continuous action spaces. This paper proposes an actor-critic with action sampling policy (AS-AC) algorithm to learn an optimal dispatching policy for idle vehicles; the method perceives the supply-demand distribution over the road network and determines the final dispatch locations according to the degree of supply-demand mismatch. Experiments on ride-hailing order datasets from New York City and Haikou show that the algorithm achieves a lower request rejection rate than the baseline algorithms.
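
As a rough illustration of the action-sampling idea — choosing a dispatch location by sampling from a distribution shaped by supply-demand mismatch — here is a hedged sketch; the softmax form, temperature, and grid-cell action space are assumptions, not the authors' exact design.

```python
import numpy as np

def sample_dispatch_cell(supply, demand, temperature=1.0, rng=None):
    """Sample a grid cell for an idle vehicle: cells with larger unmet demand
    (demand - supply) are proportionally more likely to be chosen."""
    rng = rng if rng is not None else np.random.default_rng()
    mismatch = np.asarray(demand, dtype=float) - np.asarray(supply, dtype=float)
    logits = mismatch / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Example: 4 candidate cells; cell 2 has the largest supply-demand gap.
cell = sample_dispatch_cell(supply=[3, 1, 0, 2], demand=[2, 1, 5, 2])
```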

9.
In countries where investment incentives and tax exemptions are offered, directing investments into high-priority sectors and areas, evaluating investment alternatives, and choosing methods of financing them necessitate an integrated approach. This paper presents an integrated investment and financial planning model developed for a Turkish company where the objective is maximization of after-tax cash flows. Experiments conducted with the model demonstrate that the initial financial structure and financial ratio restrictions have a determining effect on the solution, and that decisions based only on investment incentive and tax exemption schemes may not be optimal. Another group of experiments demonstrates the use of the model as a strategic planning tool to investigate the sensitivity of decisions to parameter values and to identify areas where further data analysis and marketing research would be helpful.

10.
Learning, interaction, and their combination are key capabilities required to build robust, autonomous agents. Reinforcement learning is an important part of agent learning, and agent reinforcement learning includes single-agent and multi-agent reinforcement learning. This article presents a comparative study of single-agent and multi-agent reinforcement learning, contrasting them in terms of basic concepts, environment frameworks, learning objectives, and learning algorithms; it points out their differences and connections, and discusses some of the open problems they face.

11.
As a representative emerging machine learning technique, federated learning (FL) has gained considerable popularity for its special feature of "making data available but not visible". However, potential problems remain, including privacy breaches, imbalances in payment, and inequitable distribution. These shortcomings make devices reluctant to contribute relevant data to FL, or even to participate at all. Therefore, in the application of FL, an important but also challenging issue is to motivate as many participants as possible to provide high-quality data to FL. In this paper, we propose an incentive mechanism for FL based on the continuous zero-determinant (CZD) strategies from the perspective of game theory. We first model the interaction between the server and the devices during the FL process as a continuous iterative game. We then apply the CZD strategies, first for two players and then for multiple players, to optimize the social welfare of FL, and prove that the server can keep social welfare at a high and stable level. Subsequently, we design an incentive mechanism based on the CZD strategies to attract devices to contribute all of their high-accuracy data to FL. Finally, we perform simulations to demonstrate that our proposed CZD-based incentive mechanism can indeed generate high and stable social welfare in FL.

12.
Unmanned ground vehicles (UGVs) can autonomously perform civilian and military tasks in place of humans and are of major strategic significance for future intelligent transportation and army equipment development. As artificial intelligence matures, reinforcement learning has become one of the most closely watched trends in intelligent decision-making for unmanned vehicles. This paper first briefly reviews the development history, basic principles, and core algorithms of reinforcement learning; it then analyzes and summarizes research progress on reinforcement learning in intelligent decision-making for unmanned vehicles across four typical scenarios: obstacle avoidance, lane changing and overtaking, lane keeping, and intersection traversal; finally, it discusses the problems and challenges facing reinforcement-learning-based intelligent decision-making and looks ahead to future work and potential research directions.

13.
Existing deep-reinforcement-learning-based trajectory planning methods for robotic manipulators suffer from low learning efficiency and poor policy robustness in unknown environments. To address these problems, a trajectory planning method named A-DPPO, based on a novel orientation-position reward function, is proposed: the reward is designed from the relative direction and relative position, which reduces ineffective exploration and improves learning efficiency. Distributed proximal policy optimization (DPPO) is applied to manipulator trajectory planning for the first time, improving the robustness of the planning policy. Experiments show that, compared with existing methods, A-DPPO effectively improves both learning efficiency and the robustness of the planning policy.
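
The abstract only says the reward is built from relative direction and relative position; one plausible, purely illustrative form rewards moving the end-effector closer to the target while keeping its motion direction aligned with the target direction. The weights and the cosine-similarity term below are assumptions, not A-DPPO's actual reward.

```python
import numpy as np

def orientation_position_reward(ee_pos, prev_ee_pos, target, w_pos=1.0, w_dir=0.5):
    """Hypothetical reward from relative position (distance progress)
    and relative direction (alignment of the motion with the target direction)."""
    ee_pos, prev_ee_pos, target = map(np.asarray, (ee_pos, prev_ee_pos, target))
    motion = ee_pos - prev_ee_pos
    to_target = target - prev_ee_pos

    # Position term: how much closer the end-effector got this step.
    progress = np.linalg.norm(prev_ee_pos - target) - np.linalg.norm(ee_pos - target)

    # Direction term: cosine similarity between the motion and the target direction.
    denom = np.linalg.norm(motion) * np.linalg.norm(to_target)
    alignment = float(motion @ to_target / denom) if denom > 1e-9 else 0.0

    return w_pos * progress + w_dir * alignment
```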

14.
A Static Estimation and Visual Analysis Method for Program Execution Time
Analyzing and evaluating the timing performance of software is an important topic in real-time software development. This paper proposes a visual analysis framework for program execution time based on control flow graphs. It studies the automatic analysis of the correspondence between intermediate code segments and source statements, methods for extracting and computing the CPU cycle counts of source lines, a point-to-point maximum-time analysis algorithm over the control flow graph, and a method for estimating the absolute time of CPU cycles. A practical tool for static analysis and evaluation of program execution time based on control flow graphs is designed and implemented. Finally, the work is compared with related research and summarized.
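
For the point-to-point maximum-time analysis mentioned above, a minimal sketch on an acyclic control flow graph is simply a longest-path computation; loops would first have to be bounded and unrolled or summarized, and the data structures below are invented for illustration.

```python
from functools import lru_cache

def max_cycles(cfg_succ, block_cycles, source, sink):
    """Worst-case (longest-path) cycle count from source to sink in an acyclic CFG.
    cfg_succ: {node: [successor nodes]}; block_cycles: {node: CPU cycles of that basic block}."""
    @lru_cache(maxsize=None)
    def longest_from(node):
        if node == sink:
            return block_cycles[sink]
        best = max((longest_from(s) for s in cfg_succ.get(node, [])), default=None)
        return float("-inf") if best is None else block_cycles[node] + best

    return longest_from(source)

# Toy CFG: source -> {a, b} -> sink, where branch a costs more cycles.
cfg = {"source": ["a", "b"], "a": ["sink"], "b": ["sink"], "sink": []}
cycles = {"source": 2, "a": 10, "b": 4, "sink": 1}
print(max_cycles(cfg, cycles, "source", "sink"))   # 13 = 2 + 10 + 1
```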

15.
To address the low learning efficiency and poor policy robustness of deep-reinforcement-learning-based manipulator trajectory planning, a trajectory planning method based on a speech-derived reward function is proposed. Speech is used to define the different states of the planning task, and a Markov chain is used to model these states, providing global guidance for trajectory planning and reducing the blindness of deep reinforcement learning optimization. The proposed method combines speech-based global information with relative-distance-based local information to design the reward function: in each state, the manipulator is rewarded or penalized according to how well the relative distance matches the speech guidance. Experiments show that the designed reward function effectively improves the robustness and convergence speed of deep-reinforcement-learning-based manipulator trajectory planning.
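
The abstract leaves the coupling between the speech-defined states and the distance term unspecified; one hedged reading (all stage names, ranges, and transitions below are hypothetical) is a per-stage reward that checks how well the current relative distance matches what the current speech-defined stage expects.

```python
# Hypothetical speech-defined task states modeled as a simple Markov chain:
# each stage names its successor and the distance range it expects the gripper to be in.
STAGES = {
    "approach": {"next": "grasp", "expected_range": (0.05, 0.50)},
    "grasp":    {"next": "move",  "expected_range": (0.00, 0.05)},
    "move":     {"next": None,    "expected_range": (0.00, 1.00)},
}

def stage_reward(stage, distance_to_subgoal):
    """Reward or penalize the arm by how well the relative distance fits the stage's expectation."""
    lo, hi = STAGES[stage]["expected_range"]
    if lo <= distance_to_subgoal <= hi:
        return 1.0 - distance_to_subgoal        # consistent with the speech guidance
    return -1.0                                 # inconsistent: penalize
```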

16.
To address the low sample efficiency and poor robustness of current reinforcement learning algorithms in path planning for airborne UAV platforms, a model-based reinforcement learning algorithm with intrinsic rewards is proposed. A parallel architecture completely decouples data collection from policy updates to improve learning efficiency, and intrinsic rewards are used to improve the agent's exploration of the environment and avoid convergence to suboptimal policies. During policy learning, the agent learns a dynamics model of the simulated environment so that it can better predict states, rewards, and other information over a finite horizon. On this basis, combining finite-horizon planning with the neural network's predictions improves the accuracy of the value function estimate, allowing the agent to be trained with less experience data. Experimental results show that, compared with a model-free reinforcement learning algorithm of the same architecture, the proposed algorithm needs nearly 600 fewer episodes of experience to reach the same training level, with large gains in sample efficiency and robustness; compared with traditional non-reinforcement-learning heuristic algorithms, the score improves by nearly 8,000 points; and compared with mainstream model-based reinforcement learning algorithms such as MVE, the average score improves by nearly 2,000 points, with clear improvements in sample efficiency and stability.
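
A common, generic form of the intrinsic exploration bonus combined with a learned dynamics model is the model's prediction error; the sketch below uses that form purely as an illustration (the model interface, scale factor, and reward combination are assumptions, not the paper's formula).

```python
import numpy as np

def intrinsic_reward(model, state, action, next_state, scale=0.1):
    """Curiosity-style bonus: the worse the learned dynamics model predicts the transition,
    the larger the bonus, encouraging exploration of unfamiliar states."""
    predicted_next = model.predict(state, action)          # learned dynamics model (hypothetical API)
    error = np.linalg.norm(np.asarray(next_state) - np.asarray(predicted_next))
    return scale * error

def total_reward(extrinsic, intrinsic):
    """The agent is trained on the sum of the environment reward and the exploration bonus."""
    return extrinsic + intrinsic
```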

17.
Personalized and intelligent instruction is a key and difficult problem in research on intelligent tutoring systems. This article uses intelligent agent technology to model the intelligence and behavior of the students in the system, applies reinforcement learning theory to multiple agents, and designs a reinforcement learning algorithm that incorporates eligibility traces, which is used to generate and adjust teaching content and teaching strategies suited to each individual student. The multi-agent technology realizes personalized instruction, and the reinforcement learning algorithm makes the teaching strategy intelligent. Experimental results show that the new algorithm is more effective than the original one.
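
A compact sketch of a tabular eligibility-trace update of the kind the abstract refers to is shown below (SARSA(λ)-style with accumulating traces); the learning rates are placeholders and the state/action encodings for the tutoring domain are omitted.

```python
from collections import defaultdict

ALPHA, GAMMA, LAMBDA = 0.1, 0.9, 0.8
Q = defaultdict(float)      # Q[(state, action)] -> value
E = defaultdict(float)      # eligibility trace per (state, action)

def td_lambda_update(state, action, reward, next_state, next_action):
    """The TD error is propagated to all recently visited (state, action)
    pairs in proportion to their eligibility, which speeds up credit assignment."""
    td_error = reward + GAMMA * Q[(next_state, next_action)] - Q[(state, action)]
    E[(state, action)] += 1.0                      # accumulate trace for the visited pair
    for key in list(E):
        Q[key] += ALPHA * td_error * E[key]
        E[key] *= GAMMA * LAMBDA                   # decay all traces
```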

18.
This paper briefly describes the significance, goals, and implementation of research on personifying intelligent decision-making for vessel collision avoidance (PIDVCA), presents the key techniques for building a dynamic collision-avoidance knowledge base with machine learning and the machine learning mechanism of PIDVCA theory, and, drawing on collision-avoidance simulation examples, focuses on the integrated machine learning strategy of PIDVCA theory and the mechanism for acquiring dynamic collision-avoidance knowledge.

19.
Long-Ji Lin. Machine Learning, 1992, 8(3-4): 293-321
To date, reinforcement learning has mostly been studied on simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of different frameworks, a dynamic environment was used as a testbed. The environment is moderately complex and nondeterministic. This paper describes these frameworks and algorithms in detail and presents empirical evaluation of the frameworks.
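
Of the three speed-up extensions discussed (experience replay, learned action models, teaching), experience replay is the simplest to sketch; the buffer capacity and batch size below are arbitrary, and the learner interface is hypothetical.

```python
import random
from collections import deque

class ReplayBuffer:
    """Store past (state, action, reward, next_state, done) experiences and replay them
    repeatedly so each interaction with the environment is used more than once."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Training loop sketch: act, store the experience, then replay a batch to the learner each step:
#   buffer.add(s, a, r, s2, done)
#   for exp in buffer.sample():
#       learner.update(*exp)
```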

20.
To address the poor exploration ability and the sparse rewards over the state space that traditional deep reinforcement learning suffers from in path planning for mobile robots in unknown indoor environments, an improved deep reinforcement learning algorithm based on depth image information is proposed. The depth images captured directly by a Kinect vision sensor and the target position are used as the network inputs, and the robot's linear and angular velocities are output as the next action commands. An improved reward-penalty function is designed, which increases the reward values and optimizes the state space, alleviating the reward sparsity problem to some extent. Simulation results show that the improved algorithm enhances the robot's exploration ability, optimizes the path trajectory, and lets the robot effectively avoid obstacles and plan shorter paths; compared with the DQN algorithm, the average path length is reduced by 21.4% in simple environments and by 11.3% in complex environments.
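
The input/output interface the abstract describes — a Kinect depth image plus the target position in, linear/angular velocity commands out — can be sketched as a state-construction and action-decoding step; the downsampling size and the discrete velocity set below are assumptions, and the original work may use a different action space.

```python
import numpy as np

# Hypothetical discrete action set: (linear velocity m/s, angular velocity rad/s).
ACTIONS = [(0.3, 0.0), (0.2, 0.6), (0.2, -0.6), (0.0, 1.0), (0.0, -1.0)]

def build_state(depth_image, target_xy, robot_xy, robot_yaw, size=(16, 16)):
    """Downsample the Kinect depth image and append the target position expressed
    in the robot frame; the flattened vector is the network input."""
    h, w = depth_image.shape
    sh, sw = h // size[0], w // size[1]
    coarse = (depth_image[:sh * size[0], :sw * size[1]]
              .reshape(size[0], sh, size[1], sw).mean(axis=(1, 3)))
    dx, dy = np.asarray(target_xy) - np.asarray(robot_xy)
    rel = [np.hypot(dx, dy), np.arctan2(dy, dx) - robot_yaw]   # distance and bearing to target
    return np.concatenate([coarse.ravel(), rel])

def decode_action(q_values):
    """The greedy Q-value index selects the next (linear, angular) velocity command."""
    return ACTIONS[int(np.argmax(q_values))]
```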
