Similar Articles
20 similar articles found (search time: 187 ms)
1.
李彦平  郭令忠 《控制与决策》1997,12(A00):430-434,440
Based on the D-automaton model, and using the concepts of a generalized state-predicate space and a semi-norm, the state behavior and time-optimal control problem of real-time DEDS are studied in depth; the existence of solutions to this class of control problems is then discussed.

2.
Control synthesis problems in discrete event dynamic systems   Cited by: 2 (self-citations: 0, others: 2)
This paper gives a systematic classification of the control synthesis problems arising in the supervisory control approach to discrete event dynamic systems (DEDS), obtaining six types of control synthesis problems. These are formulated as functional extremum problems, and their feasible solutions, the existence of optimal solutions, the structure of the feasible solution set, and the relationships among the problems are discussed.

3.
This paper describes the computer control system of a large roll-forming tandem rolling mill group, explains the application of the SIEMENS S7-300 programmable logic controller and the Eurotherm SSD 590 all-digital DC drive in this system, and reports the operating behavior and control performance obtained after adopting a micro-tension control scheme.

4.
NHEDB is an extensible DBMS developed on top of a relational DBMS. The system adopts an extended relational data model, mainly to support ADTs and NF² relations. This paper presents the system's data model, extensibility interfaces, and extension tools, and proposes an implementation scheme that is based on ADTs and centered on the data dictionary. The system can handle complex objects and lets users extend system functionality.

5.
Current research on practical knowledge base systems embeds a knowledge base query language into a procedural language. KBASE-P is a general-purpose knowledge base programming language. It uses KBASE as its query language and FD-PROLOG (a PROLOG extension we developed) as the procedural host language, which performs I/O and DB update operations through extended built-in predicates. Owing to careful design and implementation, the impedance mismatch between the query language and the host language is relatively small, making KBASE-P a fairly practical knowledge base programming language. The KBASE-P system supports program development in the logic programming language (the KBASE-P language) and provides text editing, file management, predicate management, fact manipulation, Datalog queries, SQL queries, and other facilities. This paper describes the design and implementation of the KBASE-P system in detail.

6.
To meet the information exchange needs of an agile manufacturing system for custom-made artificial joints, this paper discusses the advantages of ISDN and the implementation of ISDN access to the INTERNET, and describes the application of ISDN in this system.

7.
This paper presents ERDB, an extended relational model database system, and a GIS developed on top of it. It focuses on topology representation and recursive queries in the GIS, and uses them to illustrate the significance of such extensions.
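Recursive queries over GIS topology of the kind this abstract highlights amount to computing reachability in an adjacency relation. The Python sketch below is purely illustrative (the relation, region names, and function are assumptions, not from the paper) and shows the transitive-closure style of query such an extension is meant to support.

```python
# Minimal sketch (not from the paper): a recursive "connected regions" query
# over a GIS-style topological adjacency relation, the kind of query an
# extended relational system with recursion support would evaluate.
from collections import defaultdict

# adjacency facts: (region, neighboring region) -- hypothetical sample data
adjacent = [("A", "B"), ("B", "C"), ("C", "D"), ("E", "F")]

def reachable(start, facts):
    """Transitive closure of the adjacency relation from a start region."""
    graph = defaultdict(set)
    for a, b in facts:
        graph[a].add(b)
        graph[b].add(a)          # topological adjacency is symmetric
    seen, stack = {start}, [start]
    while stack:
        region = stack.pop()
        for nxt in graph[region]:
            if nxt not in seen:  # the recursive step, unrolled iteratively
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(reachable("A", adjacent))   # {'A', 'B', 'C', 'D'}
```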

8.
A CPLD-based black-and-white composite video signal acquisition system   Cited by: 8 (self-citations: 0, others: 8)
肖亮  沈建军  李飚  沈振康 《微处理机》1999,(4):21-25,33
This paper describes a black-and-white composite video signal acquisition system with a CPLD as the control unit. After introducing the features and structure of one CPLD device, the EPM7128SLC84, the structure of the acquisition system and the CPLD control logic are discussed in detail. The design offers high functional integration and is easy to implement.

9.
Research on implementing SDAI on a relational database   Cited by: 1 (self-citations: 0, others: 1)
SDAI is the data access interface specification of STEP; it defines a functional interface between engineering applications and data models described in EXPRESS. This paper discusses a method for implementing SDAI on a relational database, describes a mechanism for converting between the STEP model and the relational model, and develops an SDAI interface function library for data exchange on a relational database, laying a foundation for engineering applications of STEP.

10.
A knowledge base system with object-oriented features   Cited by: 1 (self-citations: 0, others: 1)
This paper discusses the data model, language, and implementation of KBASE+, a knowledge base system with object-oriented features. The KBASE+ data model conveniently supports object identity, class hierarchies, multiple inheritance, and other object-oriented concepts. The declarative query language KBL is an extension of DATALOG for the non-first-normal-form relational model. The paper reconstructs the semantic framework of KBL, proposes resolving conflicts in attribute inheritance by computing the relevant greatest lower bound, and realizes instance inheritance by adding rules to KBL programs. It also shows that KBL programs can be translated into semantically equivalent DATALOG programs.
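The conflict-resolution idea mentioned in the abstract, picking the greatest lower bound among multiply inherited attribute values, can be illustrated with a small Python sketch. The toy value lattice and names below are assumptions for illustration only, not KBASE+'s actual structures.

```python
# Illustrative sketch only: resolving a multiple-inheritance conflict by a
# greatest lower bound (glb), in the spirit described by the abstract.
# The toy lattice below is an assumption, not KBASE+'s data model.

# partial order on candidate attribute values: value -> set of its ancestors
order = {
    "vehicle": set(),
    "car": {"vehicle"},
    "sports_car": {"car", "vehicle"},
}

def leq(x, y):
    """x <= y in the toy lattice (y is x itself or an ancestor of x)."""
    return x == y or y in order[x]

def glb(values):
    """Greatest lower bound: the candidate below all others, if one exists."""
    for v in values:
        if all(leq(v, w) for w in values):
            return v
    return None  # no glb -> genuine, unresolvable conflict

# a class inheriting the same attribute from two parents with different values
print(glb({"car", "sports_car"}))   # -> 'sports_car'
```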

11.
Recent literature has introduced and validated a signed real measure of regular languages for quantitative analysis and synthesis of discrete-event supervisory (DES) control systems, where all events are assumed to be observable. This paper presents a modification of the language measure for supervisory control under partial observation and shows how to generalize the analysis when some of the events may not be observable at the supervisory level. In the context of DES control synthesis, the language measure of partially observable discrete-event processes is expressed in a closed form which is structurally similar to that of completely observable discrete-event processes. Examples are provided to elucidate the concept of DES control under partial observation.
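For the fully observable case, this line of work expresses the language measure in closed form as μ = (I − Π)⁻¹ χ, with Π the state transition (event cost) matrix and χ the characteristic vector of state weights. The numpy sketch below evaluates that closed form for made-up matrices; it illustrates the general formula only, not the partial-observation modification introduced in the paper.

```python
# Hedged sketch: the closed-form signed language measure mu = (I - PI)^(-1) chi
# used in this line of work for fully observable DES. The matrices below are
# made-up examples; the partial-observation correction is not reproduced here.
import numpy as np

# PI: state transition cost matrix (event costs summed per source state);
# its row sums must stay below 1 so that (I - PI) is invertible.
PI = np.array([[0.3, 0.5],
               [0.2, 0.6]])

# chi: characteristic vector assigning signed weights to marked states.
chi = np.array([1.0, -0.5])

mu = np.linalg.solve(np.eye(2) - PI, chi)   # language measure of each state
print(mu)
```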

12.
In this paper, we investigate the verification of codiagnosability for discrete event systems (DES). That is, it is desired to ascertain if the occurrence of system faults can be detected based on the information of multiple local sites that partially observe the overall DES. As an improvement of existing codiagnosability tests that resort to the original DES with a potentially computationally infeasible state space, we propose a method that employs an abstracted system model on a smaller state space for the codiagnosability verification. Furthermore, we show that this abstraction can be computed without explicitly evaluating the state space of the original model in the practical case where the DES is composed of multiple subsystems.

13.
This paper deals with the on-line control of partially observed discrete event systems (DES). The goal is to restrict the behavior of the system within a prefix-closed legal language while accounting for the presence of uncontrollable and unobservable events. In the spirit of recent work on the on-line control of partially observed DES (Heymann and Lin 1994) and on variable lookahead control of fully observed DES (Ben Hadj-Alouane et al. 1994c), we propose an approach where, following each observable event, a control action is computed on-line using an algorithm of linear worst-case complexity. This algorithm, called VLP-PO, has the following additional properties: (i) the resulting behavior is guaranteed to be a maximal controllable and observable sublanguage of the legal language; (ii) different maximals may be generated by varying the priorities assigned to the controllable events, a parameter of VLP-PO; (iii) a maximal containing the supremal controllable and normal sublanguage of the legal language can be generated by a proper selection of controllable event priorities; and (iv) no off-line calculations are necessary. We also present a parallel/distributed version of the VLP-PO algorithm called DI-VLP-PO. This version uses several communicating agents that simultaneously run (on-line) identical versions of the algorithm but on possibly different parts of the system model and the legal language, according to the structural properties of the system and the specifications. While achieving the same behavior as VLP-PO, DI-VLP-PO runs at a total complexity (for computation and communication) that is significantly lower than its sequential counterpart.
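The on-line step described above, computing a control action after each observable event with event priorities as a tuning knob, can be caricatured as follows. This Python sketch is only a rough illustration of priority-ordered event disablement under a one-step legality check; it is not the VLP-PO algorithm, and all names are hypothetical.

```python
# Rough illustration (not the actual VLP-PO algorithm): after each observed
# event, compute a control action on-line by disabling controllable events,
# lowest priority first, whenever their one-step continuation leaves the
# legal language.
def control_action(state, enabled, controllable, priority, is_legal, step):
    """Return the set of events to disable at `state`.

    enabled      -- events currently possible in the plant
    controllable -- subset of events the supervisor may disable
    priority     -- dict: higher value = keep enabled longer
    is_legal     -- predicate on states (the legality check, assumed given)
    step         -- step(state, event) -> next state
    """
    disabled = set()
    # examine controllable events from lowest to highest priority
    for ev in sorted(enabled & controllable, key=lambda e: priority[e]):
        if not is_legal(step(state, ev)):
            disabled.add(ev)
    # uncontrollable illegal continuations cannot be fixed here; a full
    # algorithm would have pruned such states earlier.
    return disabled

# toy usage with a two-state model (all names are hypothetical)
nxt = {("s0", "a"): "bad", ("s0", "b"): "s1"}
print(control_action("s0", {"a", "b"}, {"a", "b"}, {"a": 0, "b": 1},
                     lambda s: s != "bad", lambda s, e: nxt[(s, e)]))
```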

14.
This paper considers a job search problem on a partially observable Markov chain, which can be viewed as an extension of the job search in a dynamic economy in [1]. The problem is formulated so that the state changes according to a partially observable Markov chain, i.e., the current state cannot be observed but some information regarding the present state is available. All information about the unobservable state is summarized by probability distributions on the state space, and Bayes' theorem is employed as the learning procedure. Total positivity of order two, or simply TP2, is a fundamental property for investigating sequential decision problems, and it also plays an important role in the Bayesian learning procedure for a partially observable Markov process. Using this property, we consider relationships among the prior and posterior information and the optimal policy. We also examine the probabilities of making a transition into each state after some additional transitions under the optimal policy. In a stock market setting, suppose the states correspond to the business situation of a company and one state designates default; the problem is then when to sell off the stock before bankruptcy, and the probability of bankruptcy is also examined.
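The Bayesian learning procedure referred to above boils down to pushing the belief over the hidden state through the chain and reweighting by the observation likelihood. A minimal numpy sketch, with an assumed transition matrix and observation model (not from the paper):

```python
# Minimal sketch of the Bayesian learning step the abstract refers to:
# update the probability distribution over the unobservable state after one
# transition and one observation. Matrices are illustrative assumptions.
import numpy as np

P = np.array([[0.9, 0.1],      # transition matrix of the hidden chain
              [0.3, 0.7]])     # (e.g., "good business" vs. "near default")
O = np.array([[0.8, 0.2],      # O[s, y]: probability of observing signal y
              [0.4, 0.6]])     #          when the hidden state is s

def bayes_update(belief, observation):
    """Prior belief -> posterior belief after one transition + observation."""
    predicted = belief @ P                      # push belief through the chain
    posterior = predicted * O[:, observation]   # weight by observation likelihood
    return posterior / posterior.sum()          # normalize (Bayes' theorem)

belief = np.array([0.5, 0.5])
print(bayes_update(belief, observation=1))
```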

15.
The main weakness of all control methodologies is the dependence of feedback on full state measurements. In practical situations, measuring the states of a given system may fail because the measurements are sometimes impossible and sometimes possible but too expensive. Observer design for highly nonlinear dynamics is an important issue, particularly when the locally observable dynamics are not linearly observable. In such circumstances the ability to reduce the system to observable or observer form is key to observer design. This paper provides two observers for nonlinear systems given in Brunovsky form. The first is a high-gain observer with a classical output-injection form, while the second is a high-gain observer with a q-integral path. Finally, the discrete-time implementation of the high-gain observer is discussed in a linear matrix inequality framework. A motivating example is shown to highlight the efficacy of the developed observers.
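A classical high-gain observer with output injection, the first of the two designs mentioned, can be sketched in discrete time as below. The plant, gains, and Euler discretization are illustrative assumptions; the q-integral variant and the LMI-based discrete-time design from the paper are not reproduced.

```python
# Hedged sketch: a discrete-time (Euler) high-gain observer with classical
# output injection for a second-order system in Brunovsky (chain-of-integrators)
# form. The plant nonlinearity phi, the gains, and theta are illustrative
# choices, not the paper's design.
import numpy as np

dt, theta = 0.01, 20.0            # step size and high-gain parameter
k1, k2 = 2.0, 1.0                 # observer gains (Hurwitz choice)

def phi(x):                       # known nonlinearity in the last channel
    return -np.sin(x[0]) - 0.5 * x[1]

def plant_step(x):
    return x + dt * np.array([x[1], phi(x)])

def observer_step(xh, y):
    """One observer step driven by the measured output y = x1."""
    e = y - xh[0]                                  # output-injection error
    dxh = np.array([xh[1] + theta * k1 * e,        # gains scaled by theta, theta^2
                    phi(xh) + theta**2 * k2 * e])
    return xh + dt * dxh

x, xh = np.array([1.0, 0.0]), np.array([0.0, 0.0])
for _ in range(2000):
    x, xh = plant_step(x), observer_step(xh, x[0])
print("state:", x, "estimate:", xh)   # estimate converges toward the state
```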

16.
A new utility-clustering reinforcement learning algorithm, U-Clustering, is proposed. Unlike the U-Tree algorithm, it does not generate and test fringe nodes at all: it first clusters instances according to the observation-action values along the instance chain, then performs feature selection for each cluster, and finally performs feature compression; the compressed new features become new nodes of the state-space tree. Simulations on New York Driving [2,13] and experimental analysis of the algorithm show that U-Clustering is a fairly effective algorithm for large partially observable environments.
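The first step of the procedure described above, clustering instances by their observation-action values, might look roughly like the following Python sketch. The data layout and the coarse value-bucketing rule are assumptions made for illustration; this is not the published algorithm.

```python
# Toy sketch of the first U-Clustering step described above: group instances
# of an experience chain by (observation, action) and a coarse utility bucket.
# The data layout and bucketing rule are assumptions for illustration only.
from collections import defaultdict, namedtuple

Instance = namedtuple("Instance", "observation action q_value")

chain = [
    Instance("wall_left", "forward", 0.82),
    Instance("wall_left", "forward", 0.79),
    Instance("clear", "forward", 0.31),
    Instance("clear", "turn", 0.33),
]

def cluster(instances, value_resolution=0.1):
    """Cluster instances whose observation, action and bucketed value agree."""
    clusters = defaultdict(list)
    for inst in instances:
        bucket = round(inst.q_value / value_resolution)   # coarse utility bucket
        clusters[(inst.observation, inst.action, bucket)].append(inst)
    return clusters

for key, members in cluster(chain).items():
    print(key, len(members))
```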

17.
The absolute controllability of predicates in discrete event systems is studied in this paper. A predicate is absolutely controllable if it is control-invariant and the states specified by it are mutually reachable via legal states. It is shown that there is a global state feedback such that the resultant closed-loop system is strongly connected if and only if the predicate is absolutely controllable. The weakest absolutely controllable predicate stronger than the given predicate is shown to exist with respect to the given initial state. Based on the notion of the dual automaton, a graph-theoretic algorithm is given to compute the set of weakest absolutely controllable predicates stronger than the given predicate. Application of the concept of absolutely controllable predicate to a class of optimal control problem is discussed. Examples are given to illustrate the results.
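The mutual-reachability part of the characterization above can be checked by testing whether the states satisfying the predicate form a strongly connected subgraph once transitions through illegal states are discarded. A hedged Python sketch on a made-up automaton (not the paper's dual-automaton algorithm):

```python
# Hedged sketch: check whether the states satisfying a predicate are mutually
# reachable through legal states only (i.e., the legal subgraph is strongly
# connected). The automaton below is a made-up example, not from the paper.
def strongly_connected_on(legal, edges):
    """True if every legal state reaches every other via legal states only."""
    legal = set(legal)

    def reach(src, graph):
        seen, stack = {src}, [src]
        while stack:
            s = stack.pop()
            for t in graph.get(s, ()):
                if t in legal and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    fwd, bwd = {}, {}
    for s, t in edges:
        fwd.setdefault(s, []).append(t)
        bwd.setdefault(t, []).append(s)
    start = next(iter(legal))
    # strongly connected iff one legal state reaches all others both ways
    return legal <= reach(start, fwd) and legal <= reach(start, bwd)

edges = [("s0", "s1"), ("s1", "s2"), ("s2", "s0"), ("s2", "bad")]
print(strongly_connected_on({"s0", "s1", "s2"}, edges))   # True
```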

18.
Robot problems are examined in the context of semantic networks which are used to represent the state of a problem and the operators useful for solving it. Graph transformation algorithms are discussed as an aid to problem solving. Although these form only a small subset of the first-order predicate calculus based systems, considerations such as subgoal circularity, partially specified states and multiple manipulators sharing the same environment may warrant this simplification.

19.
Fujita H  Ishii S 《Neural computation》2007,19(11):3051-3087
Games constitute a challenging domain of reinforcement learning (RL) for acquiring strategies because many of them include multiple players and many unobservable variables in a large state space. The difficulty of solving such realistic multiagent problems with partial observability arises mainly from the fact that the computational cost for the estimation and prediction in the whole state space, including unobservable variables, is too heavy. To overcome this intractability and enable an agent to learn in an unknown environment, an effective approximation method is required with explicit learning of the environmental model. We present a model-based RL scheme for large-scale multiagent problems with partial observability and apply it to a card game, hearts. This game is a well-defined example of an imperfect information game and can be approximately formulated as a partially observable Markov decision process (POMDP) for a single learning agent. To reduce the computational cost, we use a sampling technique in which the heavy integration required for the estimation and prediction can be approximated by a plausible number of samples. Computer simulation results show that our method is effective in solving such a difficult, partially observable multiagent problem.
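The sampling idea the abstract relies on, replacing the exact expectation over unobservable variables with an average over sampled completions consistent with the observations, is sketched below. The card-game stand-ins and the reward function are purely illustrative assumptions, not the paper's model of hearts.

```python
# Illustrative sketch of the sampling idea described above: replace the exact
# expectation over unobservable variables (e.g., opponents' hidden cards) with
# an average over a plausible number of sampled completions.
import random

def sample_hidden_state(observed_cards, deck, rng):
    """Deal the unseen cards uniformly at random, consistent with observations."""
    unseen = [c for c in deck if c not in observed_cards]
    rng.shuffle(unseen)
    return unseen

def estimate_action_value(action, observed_cards, deck, simulate, n_samples=100):
    """Monte Carlo estimate of E[return | action], averaging over samples."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(n_samples):
        hidden = sample_hidden_state(observed_cards, deck, rng)
        total += simulate(action, hidden)        # rollout under one completion
    return total / n_samples

# toy usage: the "return" is just the fraction of low cards left unseen
deck = list(range(52))
observed = set(range(13))
value = estimate_action_value("lead_low", observed, deck,
                              simulate=lambda a, h: sum(c < 26 for c in h) / len(h))
print(value)
```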

20.
Reinforcement learning (RL) has been widely used to solve problems with little feedback from the environment. Q learning can solve Markov decision processes (MDPs) quite well. For partially observable Markov decision processes (POMDPs), a recurrent neural network (RNN) can be used to approximate Q values. However, learning time for these problems is typically very long. We present a new combination of RL and RNN to find a good policy for POMDPs in a shorter learning time. The method has two phases: first, the state space is divided into two groups (a fully observable state group and a hidden state group); second, a Q value table is used to store the values of fully observable states and an RNN is used to approximate the values of hidden states. Results of experiments on two grid-world problems show that the proposed method enables an agent to acquire a policy with better learning performance than the method using only an RNN.
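The two-phase structure described above, a Q table for fully observable states and a recurrent approximator for hidden ones, can be sketched as a simple dispatcher. The placeholder `rnn_q` stands in for a real recurrent network so the example stays self-contained; none of the names come from the paper.

```python
# Hedged sketch of the hybrid value store described above: a Q table holds
# values for fully observable states, and a recurrent approximator is
# consulted for hidden states. `rnn_q` is a stand-in, not a real network.
from collections import defaultdict

def rnn_q(history, action):
    """Stand-in for a recurrent Q approximator; a real implementation would
    feed the whole observation history through, e.g., an LSTM."""
    return 0.1 * len(history)            # dummy value so the sketch runs

class HybridAgent:
    def __init__(self, observable_states):
        self.observable = set(observable_states)
        self.q_table = defaultdict(float)    # (state, action) -> tabular value
        self.history = []                    # observations seen so far

    def observe(self, observation):
        self.history.append(observation)

    def q_value(self, observation, action):
        if observation in self.observable:
            return self.q_table[(observation, action)]   # tabular branch
        return rnn_q(self.history, action)               # hidden-state branch

    def td_update(self, observation, action, target, lr=0.1):
        # only the tabular branch is trained in this sketch
        if observation in self.observable:
            key = (observation, action)
            self.q_table[key] += lr * (target - self.q_table[key])

agent = HybridAgent(observable_states={"start", "goal"})
agent.observe("start")
agent.td_update("start", "right", target=1.0)
agent.observe("corridor")                     # a hidden (aliased) observation
print(agent.q_value("start", "right"), agent.q_value("corridor", "left"))
```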
