Similar Literature
Found 19 similar articles; search took 93 ms.
1.
This paper proposes a self-learning network model based on fuzzy logic and neural networks, together with BPSOM, a hybrid learning algorithm that combines self-organizing learning with BP learning. Trained with the BPSOM algorithm, the model automatically generates fuzzy logic rules and adjusts the membership functions of the input and output variables; the algorithm also converges better and faster than the ordinary BP learning algorithm. Simulation results show that a synchronous-generator excitation controller built on this learning network model stabilizes the generator terminal voltage well.
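The abstract gives no implementation details, so the following is only a minimal Python sketch of the general two-stage idea: a SOM-style self-organizing pass places the membership-function centers, after which a BP-style gradient pass tunes the rule consequents. The toy sine-approximation task and all names and parameter values are illustrative assumptions, not the paper's BPSOM.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)                 # training inputs
Y = np.sin(np.pi * X)                       # toy target outputs

# Stage 1 (SOM-like): self-organize membership-function centers on the input data.
centers = rng.uniform(-1, 1, 5)
for lr in np.linspace(0.5, 0.01, 20):       # shrinking learning rate
    for x in X:
        w = int(np.argmin(np.abs(centers - x)))   # winning fuzzy set
        centers[w] += lr * (x - centers[w])       # pull winner toward the sample

# Stage 2 (BP-like): gradient-tune the consequent of each fuzzy rule.
sigma, conseq = 0.3, np.zeros(5)

def fire(x):                                # normalized Gaussian memberships
    mu = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
    return mu / mu.sum()

for _ in range(200):
    for x, y in zip(X, Y):
        phi = fire(x)
        conseq += 0.1 * (y - phi @ conseq) * phi   # gradient step on consequents

rmse = np.sqrt(np.mean([(y - fire(x) @ conseq) ** 2 for x, y in zip(X, Y)]))
print("rmse:", rmse)
```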

2.
Using a fuzzy logic system, a fuzzy adaptive controller is designed for a static var compensator (SVC) to control power-system transient stability. The controller adjusts its control rules automatically with a back-propagation learning algorithm and modifies its parameters using data obtained through an identifier. Computer simulation of a two-machine system shows that the SVC fuzzy controller controls the transient stability of the system effectively.

3.
A Hybrid Learning Algorithm (cited 2 times: 0 self-citations, 2 by others)
沈智鹏, 郭晨. 《计算机工程》, 2003, 29(21): 12-13, 27
This paper proposes a self-learning network model based on fuzzy logic and neural networks, together with BPSOM, a hybrid learning algorithm that combines self-organizing learning with BP learning. Trained with the BPSOM algorithm, the model automatically generates fuzzy logic rules and adjusts the membership functions of the input and output variables; the algorithm also converges better and faster than the ordinary BP learning algorithm.

4.
A method is proposed for automatically extracting metadata from documents such as scientific papers. It combines inductive learning with a feature-similarity measure: an induction algorithm based on feature similarity automatically generates extraction rules, which are then used to extract metadata from documents. The method exploits intrinsic properties of a document to segment its content into blocks, generates extraction rules by induction, and matches the generated rules against blocks by feature similarity before extracting the metadata, which improves both the efficiency of rule generation and the accuracy of the extracted metadata.
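As a rough illustration of the rule-matching idea, here is a minimal Python sketch: per-line feature vectors stand in for the document blocks, one induced prototype per metadata field stands in for the learned extraction rules, and cosine similarity decides the match. The features, threshold, and sample data are all assumptions for illustration only.

```python
import numpy as np

def features(line, idx, total):
    """Simple per-line features: relative position, length, digit ratio, comma count."""
    n = max(len(line), 1)
    return np.array([idx / total, len(line) / 100.0,
                     sum(c.isdigit() for c in line) / n, line.count(",") / 5.0])

# Tiny labeled corpus: (line, field) pairs used to induce one prototype rule per field.
labeled = [("Deep Learning for Control", "title"),
           ("A. Smith, B. Jones", "authors"),
           ("Journal of AI, 2006, 35(3): 304-308", "source")]
protos = {f: features(l, i, len(labeled)) for i, (l, f) in enumerate(labeled)}

def extract(doc_lines, threshold=0.9):
    """Assign each block/line to the field whose induced prototype it most resembles."""
    out = {}
    for i, line in enumerate(doc_lines):
        v = features(line, i, len(doc_lines))
        best, score = None, threshold
        for field, p in protos.items():
            s = v @ p / (np.linalg.norm(v) * np.linalg.norm(p) + 1e-9)
            if s > score:
                best, score = field, s
        if best and best not in out:
            out[best] = line
    return out

print(extract(["Fuzzy Controllers via GA", "C. Lee, D. Wang",
               "Information and Control, 2006, 35(3): 1-8"]))
```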

5.
Rule Generation and Parameter Tuning of Fuzzy Controllers Based on a Hierarchical Genetic Algorithm (cited 3 times: 0 self-citations, 3 by others)
张兴华. 《信息与控制》, 2006, 35(3): 304-308
An optimal design method for fuzzy controllers based on a hierarchical genetic algorithm is proposed. A genetic algorithm with a hierarchically structured chromosome encoding is used to design the fuzzy controller, achieving automatic generation of the linguistic control rules and automatic tuning of the membership-function parameters. The design process requires neither prior knowledge of the system nor training data, and is self-organizing and self-learning. Simulation results show that the fuzzy controller obtained by this optimization has a simple structure and excellent performance.
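The defining trait here is the hierarchical chromosome, in which control genes switch rules on and off while parameter genes carry the rule parameters. The sketch below shows one plausible encoding on a toy function-approximation fitness; the GA loop, rule count, and parsimony penalty are assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
N_RULES = 7
X = np.linspace(-1, 1, 50); Y = np.tanh(2 * X)      # toy target to approximate

def decode(chrom):
    """Hierarchical chromosome: control bits switch rules on/off; parameter genes
    hold each rule's Gaussian center and consequent value."""
    ctrl = chrom[:N_RULES] > 0.5                    # control (structure) layer
    centers = chrom[N_RULES:2 * N_RULES]            # parameter layer
    conseq = chrom[2 * N_RULES:]
    return ctrl, centers, conseq

def fitness(chrom):
    ctrl, c, q = decode(chrom)
    if not ctrl.any():
        return -1e9
    mu = np.exp(-((X[:, None] - c[None, ctrl]) ** 2) / 0.1)
    yhat = (mu * q[None, ctrl]).sum(1) / (mu.sum(1) + 1e-9)
    return -np.mean((Y - yhat) ** 2) - 0.01 * ctrl.sum()   # accuracy + parsimony

pop = rng.uniform(-1, 1, (40, 3 * N_RULES))
pop[:, :N_RULES] = rng.random((40, N_RULES))        # control genes in [0, 1]
for gen in range(100):                              # plain elitist GA loop
    kids = pop[rng.integers(0, 40, 40)] + rng.normal(0, 0.1, pop.shape)
    both = np.vstack([pop, kids])
    pop = both[np.argsort([-fitness(c) for c in both])[:40]]
print("rules on:", int(decode(pop[0])[0].sum()), "fitness:", round(fitness(pop[0]), 4))
```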

6.
This paper studies solutions for automatically generating data-mining results. A genetic algorithm is employed to mine association rules automatically, and an automatic association-rule extraction algorithm based on the genetic algorithm is proposed; the algorithm is validated on an example from a TV-shopping project. Finally, a comparison with the traditional Apriori algorithm verifies its efficiency.
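A common way to cast association-rule mining as a GA is sketched below, under assumptions: each individual encodes one rule (antecedent item set, consequent item) and fitness is support times confidence. The toy transaction database merely echoes the TV-shopping setting mentioned above; it is not the paper's data.

```python
import random

random.seed(0)
ITEMS = ["tv", "phone", "stand", "cable", "remote"]
# Toy transaction database (a stand-in for the TV-shopping data mentioned above).
DB = [{"tv", "stand", "remote"}, {"tv", "cable"}, {"phone", "cable"},
      {"tv", "stand"}, {"tv", "remote", "cable"}, {"phone"}] * 10

def fitness(rule):
    """Fitness of rule 'antecedent -> consequent' = support * confidence."""
    ante, cons = rule
    if not ante or cons in ante:
        return 0.0
    n_a = sum(ante <= t for t in DB)
    n_ac = sum(ante <= t and cons in t for t in DB)
    return (n_ac / len(DB)) * (n_ac / n_a if n_a else 0.0)

def mutate(rule):
    ante, cons = set(rule[0]), rule[1]
    ante.symmetric_difference_update({random.choice(ITEMS)})  # flip one item
    if random.random() < 0.3:
        cons = random.choice(ITEMS)
    return frozenset(ante) - {cons}, cons

pop = [(frozenset({random.choice(ITEMS)}), random.choice(ITEMS)) for _ in range(30)]
for _ in range(60):                                 # simple evolutionary loop
    pop += [mutate(r) for r in pop]
    pop = sorted(set(pop), key=fitness, reverse=True)[:30]
best = pop[0]
print(set(best[0]), "->", best[1], "fitness:", round(fitness(best), 3))
```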

7.
Application of Neuro-Fuzzy Control to Ship Autopilots (cited 4 times: 0 self-citations, 4 by others)
Because the conventional fuzzy autopilot is affected by the nonlinearity and time-varying behavior of the ship-steering process and by wind and wave disturbances, its fuzzy control rules and membership functions need correction. Exploiting the self-learning ability of neural networks, fuzzy control is realized with a neural network and a neuro-fuzzy autopilot controller is designed. A hybrid learning algorithm combining the BP algorithm with a least-squares algorithm trains the parameters of the fuzzy rules and membership functions, improving the controller's adaptability. Simulation experiments show that the designed controller is effective and feasible and meets the control-performance requirements of a ship under wind and wave disturbances.
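The BP-plus-least-squares hybrid described above is ANFIS-style training. Below is a minimal sketch of that split, under assumptions (single input, Gaussian premises, zero-order consequents, and a simplified premise gradient); it is not the paper's autopilot.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, 200)
Y = 0.6 * X + 0.3 * np.sin(3 * X)           # toy steering-like nonlinearity

centers = np.linspace(-1, 1, 5)             # premise: 5 Gaussian fuzzy sets
sigma, conseq = 0.4, np.zeros(5)

def norm_fire(x):
    mu = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
    return mu / mu.sum()

for epoch in range(30):
    Phi = np.array([norm_fire(x) for x in X])
    conseq, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # LSE pass: consequents
    for x, y in zip(X, Y):                             # BP pass: premise centers
        phi = norm_fire(x)
        err = y - phi @ conseq
        # simplified premise gradient (normalization term ignored)
        centers += 0.01 * err * conseq * phi * (x - centers) / sigma ** 2

Phi = np.array([norm_fire(x) for x in X])
print("rmse:", np.sqrt(np.mean((Phi @ conseq - Y) ** 2)))
```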

8.
Research on a Fuzzy Control System Using Reinforcement Learning (cited 2 times: 0 self-citations, 2 by others)
A new self-learning fuzzy control system is proposed. The system organically integrates the bucket-brigade (BB) algorithm with a genetic algorithm to form a new reinforcement learning algorithm that can automatically learn to generate fuzzy control rules and adjust membership functions even when no input-output sample set is available.
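A minimal sketch of the bucket-brigade credit flow with an occasional GA-style replacement step, on an assumed toy chain task; the rule representation, bid fraction, and environment are illustrative, not the paper's system.

```python
import random

random.seed(3)
# Each rule maps a state to an action; 'strength' is its learned credit.
rules = [{"state": s, "action": a, "strength": 10.0}
         for s in range(4) for a in (0, 1)]

def step(state, action):
    """Toy chain environment: payoff for pushing right (action 1) at state 3."""
    return min(state + action, 3), (1.0 if state == 3 and action == 1 else 0.0)

BID = 0.1
for episode in range(500):
    state, prev = 0, None
    for _ in range(6):
        cand = [r for r in rules if r["state"] == state]
        rule = max(cand, key=lambda r: r["strength"] + random.gauss(0, 0.5))
        bid = BID * rule["strength"]
        rule["strength"] -= bid                 # rule pays its bid...
        if prev is not None:
            prev["strength"] += bid             # ...to the rule that set the stage
        state, reward = step(state, rule["action"])
        rule["strength"] += 10 * reward         # external payoff
        prev = rule
    if episode % 50 == 49:                      # occasional GA-style replacement:
        rules.sort(key=lambda r: r["strength"])  # weakest copies a strong action
        weak, strong = rules[0], random.choice(rules[len(rules) // 2:])
        weak["action"], weak["strength"] = strong["action"], strong["strength"] / 2

policy = {s: max((r for r in rules if r["state"] == s),
                 key=lambda r: r["strength"])["action"] for s in range(4)}
print("learned policy:", policy)
```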

9.
Based on an analysis of the fuzzy logic controller design process, this paper presents, from an overall perspective, the detailed design philosophy and design process of an automatic design system for intelligent controllers based on fuzzy logic and neural network techniques. The basic principles of several self-learning algorithms used in this intelligent-controller generation system, and how they are applied and incorporated, are described in some detail. Finally, the effectiveness of the automatic design system is verified using the gradient learning algorithm as an example.

10.
Application of Neuro-Fuzzy Control to Ship Autopilots (cited 1 time: 0 self-citations, 1 by others)
Because the conventional fuzzy autopilot is affected by the nonlinearity and time-varying behavior of the ship-steering process and by wind and wave disturbances, its fuzzy control rules and membership functions need correction. Exploiting the self-learning ability of neural networks, fuzzy control is realized with a neural network and a neuro-fuzzy autopilot controller is designed. A hybrid learning algorithm combining the BP algorithm with a least-squares algorithm trains the parameters of the fuzzy rules and membership functions, improving the controller's adaptability. Simulation experiments show that the designed controller is effective and feasible and meets the control-performance requirements of a ship under wind and wave disturbances.

11.
An adaptive control scheme based on a fuzzy neural network is proposed. For complex learning tasks in continuous spaces, a competitive Takagi-Sugeno fuzzy reinforcement learning network is presented whose structure integrates a Takagi-Sugeno fuzzy inference system with reinforcement learning based on an action-value function. Correspondingly, an optimized learning algorithm is proposed that trains the network into a so-called Takagi-Sugeno fuzzy variable-structure controller. Taking a single inverted-pendulum control system as an example, simulation studies show that the proposed learning algorithm outperforms other reinforcement learning algorithms.
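One plausible reading of "competitive" TS fuzzy reinforcement learning is that each rule keeps Q-values over a small set of candidate consequents and competes locally to pick one, with the TD error credited back by firing strength. The sketch below implements that reading on an assumed first-order toy plant; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
centers = np.linspace(-1, 1, 5)                 # rule centers over the state space
ACTIONS = np.array([-1.0, 0.0, 1.0])            # candidate consequents per rule
q = np.zeros((5, 3))                            # one q-value per (rule, candidate)

def fire(x):
    mu = np.exp(-((x - centers) ** 2) / 0.2)
    return mu / mu.sum()

alpha, gamma, eps = 0.1, 0.95, 0.2
for episode in range(300):
    x = rng.uniform(-1, 1)                      # state: deviation to regulate to 0
    for _ in range(30):
        phi = fire(x)
        # competition: each rule proposes its locally best (or exploratory) candidate
        choice = np.where(rng.random(5) < eps, rng.integers(0, 3, 5), q.argmax(1))
        u = phi @ ACTIONS[choice]               # blended global action
        x2 = 0.9 * x + 0.1 * u                  # toy first-order plant
        r = -x2 ** 2                            # reward: stay near zero
        q_sa = phi @ q[np.arange(5), choice]
        td = r + gamma * (fire(x2) @ q.max(1)) - q_sa
        q[np.arange(5), choice] += alpha * td * phi   # credit by firing strength
        x = x2
print("greedy consequents per rule:", ACTIONS[q.argmax(1)])
```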

12.
This paper proposes a combination of online clustering and Q-value based genetic algorithm (GA) learning scheme for fuzzy system design (CQGAF) with reinforcements. CQGAF fulfills GA-based fuzzy system design in a reinforcement learning environment where only weak reinforcement signals such as "success" and "failure" are available. In CQGAF, there are no fuzzy rules initially; they are generated automatically. The precondition part of a fuzzy system is constructed online by an aligned clustering-based approach, which achieves a flexible partition. The consequent part is then designed by Q-value based genetic reinforcement learning. Each individual in the GA population encodes the consequent-part parameters of a fuzzy system and is associated with a Q-value, which estimates the discounted cumulative reinforcement earned by the individual and serves as its fitness value for GA evolution. At each time step, an individual is selected according to the Q-values, and the corresponding fuzzy system is built and applied to the environment, yielding a critic signal. With this critic, Q-learning with an eligibility trace is executed. After each trial, GA searches for better consequent parameters based on the learned Q-values. Thus, in CQGAF, evolution is performed immediately after the end of one trial, in contrast to general GA, where many trials are performed before evolution. The feasibility of CQGAF is demonstrated through simulations of cart-pole balancing, magnetic levitation, and chaotic-system control problems with only binary reinforcement signals.
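The distinctive pieces here are Q-values serving as GA fitness and evolution running immediately after each trial. Below is a compressed Python sketch of just those two mechanics, with an assumed toy episode in place of cart-pole and with the online clustering and eligibility traces omitted; it is a sketch, not the CQGAF implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
centers = np.linspace(-1, 1, 5)

def fire(x):
    mu = np.exp(-((x - centers) ** 2) / 0.2)
    return mu / mu.sum()

def trial(conseq):
    """Run one episode; survival time is the only (weak) reinforcement."""
    x = rng.uniform(-0.5, 0.5)
    for t in range(100):
        x = 0.9 * x + 0.1 * (fire(x) @ conseq) + rng.normal(0, 0.02)
        if abs(x) > 1.0:
            return t              # failure: left the allowed region at step t
    return 100

pop = rng.uniform(-1, 1, (20, 5))       # individuals = consequent parameter sets
qval = np.zeros(20)                     # one Q-value per individual
for trial_no in range(200):
    i = int(np.argmax(qval + rng.normal(0, 5, 20)))   # noisy Q-greedy selection
    ret = trial(pop[i])
    qval[i] += 0.3 * (ret - qval[i])                  # Q-value <- running return
    # GA step right after the trial: replace the worst by a crossover of the
    # two highest-Q individuals (Q-values act as fitness).
    a, b = np.argsort(qval)[-2:]
    child = 0.5 * (pop[a] + pop[b]) + rng.normal(0, 0.05, 5)
    worst = int(np.argmin(qval))
    pop[worst], qval[worst] = child, qval[[a, b]].mean()
print("best individual's Q:", round(float(qval.max()), 1))
```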

13.
An adaptive control scheme based on a fuzzy neural network is proposed. For complex learning tasks in continuous spaces, a competitive Takagi-Sugeno fuzzy reinforcement learning network is presented whose structure integrates a Takagi-Sugeno fuzzy inference system with reinforcement learning based on an action-value function. Correspondingly, an optimized learning algorithm is proposed that trains the network into a so-called Takagi-Sugeno fuzzy variable-structure controller. Taking a single inverted-pendulum control system as an example, simulation studies show that the proposed learning algorithm outperforms other reinforcement learning algorithms.

14.
This paper proposes a three-layered parallel fuzzy inference model called reinforcement fuzzy neural network with distributed prediction scheme (RFNN-DPS), which performs reinforcement learning with a novel distributed prediction scheme. In RFNN-DPS, no additional predictor for the external reinforcement signal is necessary; the internal reinforcement information is distributed into the fuzzy rules (rule nodes). Therefore, only one network is needed to construct a fuzzy logic system with the abilities of parallel inference and reinforcement learning. The information for prediction in RFNN-DPS consists of credit values stored in the fuzzy rule nodes, where each node holds a credit vector representing the reliability of the corresponding fuzzy rule. The credit values are used not only to predict the external reinforcement signal but also to provide a more profitable internal reinforcement signal to each fuzzy rule itself. RFNN-DPS performs a credit-based exploratory algorithm to adjust its internal status according to the internal reinforcement signal. During learning, the RFNN-DPS network is constructed by a single-step or multistep reinforcement learning algorithm based on the ART concept. Our experimental results show that RFNN-DPS has the advantages of a simple network structure, fast learning speed, and explicit representation of rule reliability.
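A heavily simplified sketch of the distributed-prediction idea: the critic lives inside the rules, each holding a scalar credit (the paper uses credit vectors), and the internal reinforcement is the TD error of the credit-weighted prediction. Everything else, including the toy plant and the credit-based exploration rule, is assumed.

```python
import numpy as np

rng = np.random.default_rng(8)
centers = np.linspace(-1, 1, 5)
credit = np.zeros(5)                    # one credit value per fuzzy rule node
action_w = rng.normal(0, 0.1, 5)        # per-rule consequent (actor part)

def fire(x):
    mu = np.exp(-((x - centers) ** 2) / 0.2)
    return mu / mu.sum()

gamma = 0.95
for episode in range(400):
    x = rng.uniform(-0.8, 0.8)
    for _ in range(30):
        phi = fire(x)
        p = phi @ credit                            # distributed prediction
        noise = rng.normal(0, 0.3 / (1 + abs(p)))   # explore less where credit is high
        u = phi @ action_w + noise
        x2 = 0.9 * x + 0.1 * u
        r = -x2 ** 2
        internal = r + gamma * (fire(x2) @ credit) - p   # internal reinforcement
        credit += 0.1 * internal * phi              # each rule updates its credit share
        action_w += 0.05 * internal * noise * phi   # credit-based exploratory update
        x = x2
print("per-rule credits:", np.round(credit, 2))
```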

15.
Competitive Takagi-Sugeno Fuzzy Reinforcement Learning (cited 4 times: 0 self-citations, 4 by others)
For complex learning tasks in continuous spaces, a competitive Takagi-Sugeno fuzzy reinforcement learning network (CTSFRLN) is proposed whose structure integrates a Takagi-Sugeno fuzzy inference system with reinforcement learning based on an action-value function. Two learning algorithms are proposed accordingly, competitive Takagi-Sugeno fuzzy Q-learning and competitive Takagi-Sugeno fuzzy winner learning, which train the CTSFRLN into a so-called Takagi-Sugeno fuzzy variable-structure controller. Taking a double inverted-pendulum control system as an example, simulation studies show that the proposed learning algorithms outperform other reinforcement learning algorithms.

16.
This paper proposes the design of fuzzy controllers by ant colony optimization (ACO) incorporated with fuzzy-Q learning, called ACO-FQ, with reinforcements. For a fuzzy inference system, the antecedent part is partitioned a priori and all candidate consequent actions of the rules are listed. In ACO-FQ, the tour of an ant is regarded as a combination of consequent actions, one selected from each rule; the search for the best combination is partially based on pheromone trails. Each candidate in the consequent part of a rule is assigned a corresponding Q-value, which is updated by fuzzy-Q learning. The best combination of consequent values of a fuzzy inference system is searched for according to pheromone levels and Q-values. ACO-FQ is applied to three reinforcement fuzzy control problems: (1) water bath temperature control; (2) magnetic levitation control; and (3) truck backup control. Comparisons with other reinforcement fuzzy system design methods verify the performance of ACO-FQ.
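A minimal sketch of the ant tour over rule consequents, with selection probabilities shaped by pheromone and Q-values and both updated from the episode return. The toy plant, the desirability formula, and all constants are assumptions rather than the ACO-FQ specification.

```python
import numpy as np

rng = np.random.default_rng(6)
N_RULES = 5
CAND = np.array([-1.0, 0.0, 1.0])       # candidate consequents per rule
centers = np.linspace(-1, 1, N_RULES)   # fixed (a priori) antecedent partition
pher = np.ones((N_RULES, 3))            # pheromone per (rule, candidate)
qv = np.zeros((N_RULES, 3))             # fuzzy Q-value per (rule, candidate)

def fire(x):
    mu = np.exp(-((x - centers) ** 2) / 0.2)
    return mu / mu.sum()

def episode(choice):
    """Evaluate the fuzzy controller assembled from the ant's tour."""
    x, ret = 0.8, 0.0
    for _ in range(40):
        x = 0.9 * x + 0.1 * (fire(x) @ CAND[choice])
        ret += -abs(x)                  # reward: regulate toward zero
    return ret

for it in range(150):
    # Ant tour: in each rule, pick a consequent by pheromone x Q desirability.
    desir = pher * np.exp(qv)
    probs = desir / desir.sum(1, keepdims=True)
    choice = np.array([rng.choice(3, p=probs[r]) for r in range(N_RULES)])
    ret = episode(choice)
    idx = (np.arange(N_RULES), choice)
    qv[idx] += 0.1 * (ret / 40 - qv[idx])           # fuzzy-Q style update
    pher *= 0.95                                    # evaporation
    pher[idx] += max(ret / 40 + 1, 0)               # reinforce the tour taken
print("selected consequents:", CAND[qv.argmax(1)])
```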

17.

The unit commitment problem (UCP) aims at optimizing generation cost for meeting a given load demand under several operational constraints. We propose a fuzzy reinforcement learning (RL) approach for an efficient and reliable solution to the unit commitment problem. In particular, we cast UCP as a multiagent fuzzy reinforcement learning task wherein individual generators act as players optimizing the cost of meeting a given load over a twenty-four-hour period. The unit commitment task is fuzzified, and the optimal unit commitment solution is generated by employing RL on this fuzzy multigenerator setup. Our proposed multiagent RL framework does not assume any a priori task or system knowledge, and the generators gradually learn to produce optimal output based solely on their collective generation. We treat the UCP as a sequential decision-making task with reward/penalty to reduce the collective generation cost of the generators. To the best of our knowledge, ours is the first attempt at solving UCP by fuzzy reinforcement learning. We test our approach on a ten-generating-unit system with several equality and inequality constraints. Simulation results and comparisons against several recent UCP solution methods demonstrate the superiority and viability of our proposed multiagent fuzzy reinforcement learning technique.

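As an illustration only, the sketch below casts a tiny unit-commitment instance as independent Q-learners sharing a cost-based reward; the coarse load-level discretization stands in for the paper's fuzzification, and the unit data, shortfall penalty, and learning constants are invented, not the paper's ten-unit system.

```python
import numpy as np

rng = np.random.default_rng(7)
CAP  = np.array([100.0, 80.0, 50.0])            # unit capacities (MW)
COST = np.array([1.0, 1.5, 2.5])                # per-MW running cost
LOAD = [90.0, 150.0, 200.0, 120.0]              # 24h profile shrunk to 4 hours

def load_level(d):
    """Coarse stand-in for the paper's fuzzification of demand."""
    return 0 if d < 110 else (1 if d < 170 else 2)

Q = np.zeros((3, 3, 2))     # one Q-table per generator: (load level, off/on)
eps = 0.3
for episode in range(3000):
    for d in LOAD:
        s = load_level(d)
        on = np.array([rng.integers(2) if rng.random() < eps
                       else int(Q[g, s].argmax()) for g in range(3)])
        supply = float((CAP * on).sum())
        gen = CAP * on * min(1.0, d / max(supply, 1e-9))        # dispatch pro rata
        cost = (COST * gen).sum() + 1e3 * max(0.0, d - supply)  # shortfall penalty
        for g in range(3):  # shared (collective) reward, per-agent update
            Q[g, s, on[g]] += 0.05 * (-cost / 100.0 - Q[g, s, on[g]])
    eps = max(0.02, eps * 0.999)

for s in range(3):
    print("load level", s, "commitment:", [int(Q[g, s].argmax()) for g in range(3)])
```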

18.
This article presents a new method for learning and tuning a fuzzy logic controller automatically. A reinforcement learning scheme and a genetic algorithm are used in conjunction with a multilayer neural network model of a fuzzy logic controller, which can automatically generate the fuzzy control rules and refine the membership functions at the same time to optimize the final system's performance. In particular, the proposed self-learning and tuning fuzzy logic controller based on genetic algorithms and a reinforcement learning architecture, called the Stretched Genetic Reinforcement Fuzzy Logic Controller (SGRFLC), can learn fuzzy logic control rules even when only weak information, such as a binary "success" or "failure" target signal, is available. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. It is shown that the system can solve a fairly difficult control learning problem. Concretely, the task is a cart-pole balancing system, in which a pole is hinged to a movable cart to which a continuously variable control force is applied. © 1997 John Wiley & Sons, Inc.

19.
Learning and tuning fuzzy logic controllers through reinforcements (cited 18 times: 0 self-citations, 18 by others)
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous cart-pole balancing schemes in both learning speed and robustness to changes in the dynamic system's parameters.
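Of GARIC's ingredients, the localized mean of maximum (LMOM) defuzzifier is the most self-contained. Here is one plausible reading in Python, taking the LMOM of a rule as the midpoint of its consequent triangle's alpha-cut at the rule strength; the consequent sets and strengths are illustrative assumptions.

```python
# Asymmetric triangular consequent sets: (center, left spread, right spread).
CONSEQ = [(-1.0, 0.4, 0.2), (0.0, 0.3, 0.3), (1.0, 0.2, 0.4)]

def lmom(center, sl, sr, w):
    """Localized mean-of-maximum: midpoint of the two points where the
    triangle's membership equals the rule strength w."""
    return center + (sr - sl) * (1.0 - w) / 2.0

def defuzzify(strengths):
    """Combine each firing rule's local conclusion, weighted by rule strength."""
    num = sum(w * lmom(c, sl, sr, w)
              for (c, sl, sr), w in zip(CONSEQ, strengths))
    return num / (sum(strengths) + 1e-9)

# Mostly the 'negative' rule fires, so the crisp output is pulled toward -1.
print(defuzzify([0.7, 0.2, 0.1]))
```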
