41.
Adaptive sensing involves actively managing sensor resources to achieve a sensing task, such as object detection, classification, and tracking, and represents a promising direction for new applications of discrete event system methods. We describe an approach to adaptive sensing based on approximately solving a partially observable Markov decision process (POMDP) formulation of the problem. Such approximations are necessary because of the very large state space involved in practical adaptive sensing problems, precluding exact computation of optimal solutions. We review the theory of POMDPs and show how the theory applies to adaptive sensing problems. We then describe a variety of approximation methods, with examples to illustrate their application in adaptive sensing. The examples also demonstrate the gains that are possible from nonmyopic methods relative to myopic methods, and highlight some insights into the dependence of such gains on the sensing resources and environment.
Edwin K. P. Chong received the BE degree with First Class Honours from the University of Adelaide, South Australia, in 1987, and the MA and PhD degrees in 1989 and 1991, respectively, both from Princeton University, where he held an IBM Fellowship. He joined the School of Electrical and Computer Engineering at Purdue University in 1991, where he was named a University Faculty Scholar in 1999 and was promoted to Professor in 2001. Since August 2001, he has been a Professor of Electrical and Computer Engineering and a Professor of Mathematics at Colorado State University. His research interests span communication and sensor networks, stochastic modeling and control, and optimization methods. He coauthored the best-selling book An Introduction to Optimization, 3rd Edition, Wiley-Interscience, 2008. He is currently on the editorial boards of the IEEE Transactions on Automatic Control, Computer Networks, the Journal of Control Science and Engineering, and IEEE Expert Now. He is a Fellow of the IEEE and has served as an IEEE Control Systems Society Distinguished Lecturer. He received the NSF CAREER Award in 1995 and the ASEE Frederick Emmons Terman Award in 1998, and was a co-recipient of the 2004 Best Paper Award for a paper in the journal Computer Networks. He has served as Principal Investigator on numerous projects funded by NSF, DARPA, and other agencies.

Christopher M. Kreucher received the BS, MS, and PhD degrees in Electrical Engineering from the University of Michigan in 1997, 1998, and 2005, respectively. He is currently a Senior Systems Engineer at Integrity Applications Incorporated in Ann Arbor, Michigan. His current research interests include nonlinear filtering (specifically particle filtering), Bayesian methods of fusion and multitarget tracking, self-localization, information-theoretic sensor management, and distributed swarm management.

Alfred O. Hero III received the BS (summa cum laude) from Boston University (1980) and the PhD from Princeton University (1984), both in Electrical Engineering. Since 1984 he has been with the University of Michigan, Ann Arbor, where he is a Professor in the Department of Electrical Engineering and Computer Science and, by courtesy, in the Department of Biomedical Engineering and the Department of Statistics. He has held visiting positions at the Massachusetts Institute of Technology (2006), Boston University, I3S University of Nice, Sophia-Antipolis, France (2001), Ecole Normale Superieure de Lyon (1999), Ecole Nationale Superieure des Telecommunications, Paris (1999), the Scientific Research Labs of the Ford Motor Company, Dearborn, Michigan (1993), Ecole Nationale Superieure des Techniques Avancees (ENSTA), Ecole Superieure d'Electricite, Paris (1990), and M.I.T. Lincoln Laboratory (1987–1989). His recent research interests include inference for sensor networks, adaptive sensing, bioinformatics, inverse problems, and statistical signal and image processing. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and a member of Tau Beta Pi, the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and the US National Commission (Commission C) of the International Union of Radio Science (URSI). He has received an IEEE Signal Processing Society Meritorious Service Award (1998), an IEEE Signal Processing Society Best Paper Award (1998), an IEEE Third Millennium Medal, and a 2002 IEEE Signal Processing Society Distinguished Lectureship. He was President of the IEEE Signal Processing Society (2006–2007) and during his term served on the TAB Periodicals Committee (2006). He was a member of the IEEE TAB Society Review Committee (2008) and is Director-elect of IEEE Division IX (2009).
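The abstract of entry 41 above reviews the POMDP machinery behind adaptive sensing. As a minimal sketch of the belief-state update that this machinery rests on, the following toy example applies Bayes' rule after a sensing action and an observation; the matrices T and O and their sizes are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def belief_update(b, a, z, T, O):
        """b: belief over states; T[a][s, s']: transition probabilities;
        O[a][s', z]: observation likelihoods."""
        predicted = T[a].T @ b                 # predict the next-state distribution
        unnormalized = O[a][:, z] * predicted  # weight by the likelihood of observing z
        return unnormalized / unnormalized.sum()

    # Toy model: 2 hidden states, 1 sensing action, 2 observations (all assumed).
    T = np.array([[[0.9, 0.1],
                   [0.2, 0.8]]])
    O = np.array([[[0.8, 0.2],
                   [0.3, 0.7]]])
    b = np.array([0.5, 0.5])
    print(belief_update(b, a=0, z=1, T=T, O=O))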
42.
In highly flexible and integrated manufacturing systems, such as semiconductor fabs, strong interactions between the equipment condition, the operations executed on the various machines, and the outgoing product quality necessitate integrated decision making in the domains of maintenance scheduling and production operations. Furthermore, in highly complex manufacturing equipment, the underlying condition is not directly observable and can only be inferred probabilistically from the available sensor readings. In order to deal with interactions between maintenance and production operations in Flexible Manufacturing Systems (FMSs) in which equipment conditions are not perfectly observable, we propose in this paper a decision-making method based on a partially observable Markov decision process (POMDP), yielding an integrated policy in the realms of maintenance scheduling and production sequencing. Optimization was pursued using a metaheuristic method driven by the results of discrete-event simulations of the underlying manufacturing system. The new approach is demonstrated in simulations of a generic semiconductor manufacturing cluster tool. The results showed that, regardless of uncertainties in the knowledge of actual equipment conditions, jointly making maintenance and production sequencing decisions consistently outperforms the current practice of making these decisions separately.
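A hedged sketch of the kind of simulation-driven metaheuristic loop the abstract describes: candidate joint maintenance/sequencing policies are scored by repeated discrete-event simulation runs and improved by simple local search. The simulate and perturb callables, the policy encoding, and the acceptance rule are illustrative assumptions, not the paper's actual method.

    def simulation_search(initial_policy, simulate, perturb, n_iters=100, n_rollouts=20):
        """Local search over joint policies, scored by averaged simulation rollouts."""
        def score(policy):
            return sum(simulate(policy, seed=s) for s in range(n_rollouts)) / n_rollouts

        best, best_score = initial_policy, score(initial_policy)
        for _ in range(n_iters):
            candidate = perturb(best)          # propose a neighbouring joint policy
            candidate_score = score(candidate)
            if candidate_score > best_score:   # keep only improving candidates
                best, best_score = candidate, candidate_score
        return best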
43.
This paper explains how Partially Observable Markov Decision Processes (POMDPs) can provide a principled mathematical framework for modelling the inherent uncertainty in spoken dialogue systems. It briefly summarises the basic mathematics and explains why exact optimisation is intractable. It then describes in some detail a form of approximation called the Hidden Information State model which does scale and which can be used to build practical systems. A prototype HIS system for the tourist information domain is evaluated and compared with a baseline MDP system using both user simulations and a live user trial. The results give strong support to the central contention that the POMDP-based framework is both a tractable and powerful approach to building more robust spoken dialogue systems.
44.
Point-based value iteration is a class of effective algorithms for solving partially observable Markov decision process (POMDP) problems. Most existing point-based value iteration algorithms explore the belief point set with a single heuristic criterion, which limits their effectiveness. To address this, the paper proposes a value iteration algorithm that explores the belief point set with hybrid criteria (HHVI) and maintains both an upper and a lower bound on the value function. When expanding the explored point set, belief points whose upper-lower bound gap exceeds a threshold are selected for expansion, and among their successor belief points whose bound gap also exceeds the threshold, the one farthest from the already explored set is chosen, ensuring that the explored points are distributed as effectively as possible over the reachable belief space. Experiments on four benchmark problems show that HHVI maintains convergence efficiency and converges to better global solutions.
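A minimal sketch, under stated assumptions, of the hybrid belief-point expansion described above: only beliefs whose upper/lower bound gap exceeds a threshold are expanded, and among qualifying successors the one farthest from the explored set is kept. The upper, lower, and successors callables stand in for the real HHVI components.

    import numpy as np

    def expand(points, upper, lower, successors, eps=0.01):
        """points: list of belief vectors; upper/lower: bound functions on beliefs;
        successors: maps a belief to its reachable successor beliefs."""
        new_points = []
        for b in points:
            if upper(b) - lower(b) <= eps:
                continue                                   # bound gap already small
            candidates = [s for s in successors(b) if upper(s) - lower(s) > eps]
            if not candidates:
                continue
            explored = points + new_points
            # farthest-point heuristic: spread new points over the reachable belief space
            farthest = max(candidates,
                           key=lambda s: min(np.linalg.norm(s - p, 1) for p in explored))
            new_points.append(farthest)
        return points + new_points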
45.
Thrust load planning for shield tunneling based on partially observable Markov decision process theory
To address the pose control problem during shield tunneling, a thrust load planning method based on partially observable Markov decision process (POMDP) theory is proposed. In the thrust load planning model, automatic deviation correction of the shield is treated as a sequential decision problem under uncertainty, with full consideration of the stochastic factors in the excavation process: the resistance encountered during tunneling, the thrust load, and the shield pose are defined as the POMDP state set, action set, and observation set, respectively, and methods for obtaining several key parameters, including the belief state, the state transition function, and the observation function, are then discussed in detail. In computing the value function, the influence of both the degree of pose deviation and the smoothness of the thrust load on load decisions is taken into account: an immediate reward function and a discounted long-term reward function are established, and a point-based value iteration algorithm is used to find the optimal thrust load planning policy. A case study of Tianjin Metro Line 9 shows that the POMDP-based thrust load planning method is reasonable and effective and can adapt to random variations in excavation resistance.
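As an illustration of the reward structure described above, the sketch below combines a pose-deviation penalty with a thrust-load smoothness penalty; the weights and the exact functional form are assumptions, not the paper's.

    import numpy as np

    def immediate_reward(pose_error, load, prev_load, w_pose=1.0, w_smooth=0.1):
        """Negative cost: penalize pose deviation and abrupt thrust-load changes."""
        deviation = np.linalg.norm(pose_error)        # degree of pose deviation
        roughness = np.linalg.norm(load - prev_load)  # lack of load smoothness
        return -(w_pose * deviation + w_smooth * roughness)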
46.
This paper studies optimal resource allocation for an ARQ-enabled system based on the fading channel state and the data-link-layer buffer queue state. The goal is to maximize system throughput under an average power constraint by adaptively adjusting the power allocation and modulation scheme. Since the number of ARQ retransmissions is not limited in this system, maximizing throughput is equivalent to minimizing the number of packets lost to link-layer buffer overflow. The optimization problem is formulated as a Markov decision process, and a dynamic programming method is proposed to solve it. For practicality, a simple suboptimal resource allocation method is also proposed; simulation results show that its performance is very close to that of the optimal scheduling method.
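A generic value-iteration sketch for the kind of finite MDP the abstract formulates (state = joint channel/queue condition, action = power and modulation choice). The arrays P and R are placeholders for the paper's actual model, and the average-power constraint is not enforced here.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        """P[a, s, s']: transition probabilities; R[s, a]: expected one-step reward."""
        n_states = P.shape[1]
        V = np.zeros(n_states)
        while True:
            Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q[s, a]
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)             # values and a greedy policy
            V = V_new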
47.
Combining heuristic solution techniques with reinforcement learning, this work studies instance-based approximate algorithms for POMDP problems in depth, including NNI, which is based on the nearest-neighbour algorithm, its parameterized enhanced version ENNI, and LWI, which is based on locally weighted regression, and compares the performance of these algorithms in practical applications experimentally. The experiments show that instance-based methods for solving POMDP problems can obtain suboptimal solutions with good performance.
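A minimal sketch in the spirit of the nearest-neighbour (NNI) variant mentioned above: the value of a new belief is approximated from its k closest stored (belief, value) instances. The L1 distance and the choice of k are illustrative assumptions.

    import numpy as np

    def nn_value(belief, instances, values, k=3):
        """Average the values of the k stored beliefs closest to `belief` (L1 distance)."""
        dists = np.array([np.linalg.norm(belief - b, 1) for b in instances])
        nearest = np.argsort(dists)[:k]
        return float(np.mean(np.asarray(values)[nearest]))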
48.
A continual planning system for dynamic and uncertain environments
李响  陈小平 《计算机学报》2005,28(7):1163-1170
Planning is an important direction of artificial intelligence research with extremely broad applications. In recent years, the research focus has shifted to planning in dynamic, uncertain environments. This paper combines the strengths of partially observable Markov decision processes (POMDPs) and the Procedural Reasoning System (PRS) to propose POMDPRS, a continual planning system with more comprehensive adaptability to dynamic, uncertain environments. The system uses the continual planning mechanism of PRS to interleave planning and execution, which improves the efficiency of POMDP decision making in dynamic environments under certain conditions; on the other hand, it replaces PRS's first-order logic belief representation and plan selection mechanism with the POMDP probabilistic belief model and the principle of maximum expected utility, greatly strengthening its ability to handle environmental uncertainty.
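A hedged sketch of the maximum-expected-utility plan selection that POMDPRS substitutes for PRS's logical plan selection: each candidate plan is scored by its expected utility under the current belief. The plan names and per-plan utility vectors are toy placeholders, not the paper's model.

    import numpy as np

    def select_plan(belief, utilities):
        """belief: distribution over states; utilities[plan]: per-state utility vector."""
        expected = {plan: float(np.dot(belief, u)) for plan, u in utilities.items()}
        return max(expected, key=expected.get)

    # Toy example: two candidate plans over a 2-state belief.
    belief = np.array([0.7, 0.3])
    utilities = {"refuel": np.array([5.0, 1.0]), "explore": np.array([2.0, 6.0])}
    print(select_plan(belief, utilities))   # -> "refuel" (expected utility 3.8 vs 3.2)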
49.
This paper studies optimal resource allocation for an ARQ-enabled system based on the fading channel state and the data-link-layer buffer queue state. To maximize system throughput under an average power constraint by adaptively adjusting the power allocation and modulation scheme, the optimization problem is formulated as a Markov decision process and a dynamic programming method is proposed to solve it.
50.
于丹宁  倪坤  刘云龙 《计算机工程》2021,47(2):90-94,102
QMDP-net, a convolutional-neural-network-based value iteration algorithm for partially observable Markov decision processes (POMDPs), performs well without prior knowledge, but suffers from optimization difficulties such as unstable training and parameter sensitivity. This paper proposes RQMDP-net, a POMDP value iteration algorithm based on a recurrent convolutional neural network, which uses a gated recurrent unit network to implement the value iteration update, preserving the convolutional structure of the input and recurrent weight matrices while strengthening the network's temporal...
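For reference, a sketch of the classical QMDP rule that QMDP-net-style architectures learn to approximate: fully observable Q-values (for example, from value iteration as sketched under entry 46) are weighted by the current belief and the best action is taken. The arrays here are toy placeholders, not RQMDP-net's learned quantities.

    import numpy as np

    def qmdp_action(belief, Q):
        """belief: distribution over states; Q[s, a]: fully observable Q-values."""
        return int(np.argmax(belief @ Q))   # argmax_a  sum_s b(s) Q(s, a)

    belief = np.array([0.7, 0.3])
    Q = np.array([[1.0, 0.5],
                  [0.2, 0.9]])
    print(qmdp_action(belief, Q))   # -> 0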