Similar Documents
19 similar documents found (search time: 140 ms)
1.
An off-line trained neural network is applied to navigation sensor fault detection. First, several thousand platform-compass readings covering a variety of sailing states, selected from a segment of a ship's sea-trial data, are used to train the neural network while simultaneously selecting the number of input delays and hidden units. The trained network with the chosen structure is then used as an online estimator to make one-step predictions of the platform-compass readings. Finally, faults are detected from the difference between each compass reading and its predicted value. Training and fault-detection simulations on two segments of platform-compass readings from the same sea trial of the same ship demonstrate the feasibility of the method.
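The detection scheme above can be sketched end to end. This is a minimal stand-in, not the paper's method: a linear autoregressive one-step predictor fitted by least squares plays the role of the trained time-delay network, the compass data are synthetic, and the threshold rule (mean plus eight standard deviations of the fault-free residual) is an assumed choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "compass heading" signal: slow turn plus noise (hypothetical data).
t = np.arange(2000)
heading = 30 + 5 * np.sin(2 * np.pi * t / 400) + 0.05 * rng.standard_normal(t.size)

# One-step predictor with d input delays. A linear AR model fitted by least
# squares stands in for the off-line trained neural network described above.
d = 8
X = np.column_stack([heading[i:i - d] for i in range(d)])  # rows: [y_k .. y_{k+d-1}]
y = heading[d:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(window):
    return window @ w

# Inject a step (bias) fault into a test copy and detect it from the residual
# between each reading and its one-step prediction.
test = heading.copy()
fault_at = 1500
test[fault_at:] += 2.0                   # 2-degree bias fault

residual = np.array([abs(test[k] - predict(test[k - d:k])) for k in range(d, t.size)])
clean = residual[: fault_at - d]         # residuals before the fault
threshold = clean.mean() + 8 * clean.std()
alarms = np.nonzero(residual > threshold)[0] + d

print("first alarm at sample", alarms[0])
```

The residual stays at the noise level until the fault appears, then jumps by roughly the bias magnitude, so the first alarm lands at (or just after) the fault onset.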

2.
Addressing the practical characteristics of an electromagnetic speed-regulation system, an online fault-detection method for nonlinear systems with modeling uncertainty is proposed. The system is assumed to have measurable inputs and outputs only, and faults are modeled as functions of the state and input variables. An online nonlinear estimator based on an RBF neural network is used to track faults arising in the speed-regulation system, and the method is proven to be robust for fault detection in nonlinear systems with modeling uncertainty. A simulation example illustrates the effectiveness of the fault-detection method.
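The core of such a scheme is an RBF network whose weights adapt online to track an unknown fault function. The sketch below is an illustrative simplification, not the paper's estimator: it tracks a scalar "fault" function with a Gaussian RBF expansion and an LMS-style weight update, with all signals, centers, and learning rate chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal RBF online estimator: Gaussian basis, gradient-descent weight
# update, tracking an unknown "fault" function from streaming samples
# (all signals synthetic; the real method embeds this in the system model).
centers = np.linspace(-2, 2, 9)
width = 0.5
W = np.zeros(centers.size)

def true_fault(x):
    return 0.8 * np.tanh(2 * x)          # unknown fault to be tracked

lr = 0.2
errs = []
for _ in range(3000):
    x = rng.uniform(-2, 2)
    basis = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    e = true_fault(x) - W @ basis        # tracking error
    W += lr * e * basis                  # online (LMS-style) update
    errs.append(abs(e))

print(f"mean |error| first 100: {np.mean(errs[:100]):.3f}, "
      f"last 100: {np.mean(errs[-100:]):.3f}")
```

After a few thousand samples the estimator's output follows the fault closely, which is the behavior a residual-based detector then exploits.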

3.
Neural-network-based fault detection and fault-tolerant control for nonlinear systems
Exploiting the nonlinear modeling capability of neural networks, a fault-detection and fault-tolerant control method for nonlinear systems is proposed. A neural-network fault estimator is first designed to estimate the system fault vector online and realize fault detection; on this basis, a compensating controller is introduced to cancel the effect of the fault on system operation and thus achieve fault-tolerant control. Stability is analyzed using the Lyapunov method.

4.
Intelligent fault diagnosis based on state estimation for a class of nonlinear systems
For a class of nonlinear systems with modeling errors, an intelligent fault-diagnosis method based on state estimation is studied. A state-estimator design is first proposed; an RBF neural network is then used to approximate the system fault while the state is being estimated. The fault estimator takes the state estimate as its input, and the estimated fault can be used either for fault-tolerant control or for alarming. Via a diffeomorphism, the nonlinear system with modeling errors is transformed into a normal form that is easier to analyze, on which basis the stability and robustness of the diagnosis system are analyzed. A simulation example demonstrates the effectiveness of the method.

5.
The inertial measurement unit (IMU) is a key sensor in the navigation system of an underwater vehicle, and its reliability directly affects navigation performance. To improve the fault tolerance of the IMU, this paper proposes an IMU fault-diagnosis technique based on the unscented Kalman filter (UKF). First, from the vehicle's dynamic equations and the characteristics of the navigation system, an analytical model relating IMU faults to the navigation states is established. Navigation filtering is then performed with the UKF, on the basis of which a decoupling-matrix method is proposed for IMU fault detection. Using the innovation-orthogonality property of the UKF, a method for real-time estimation of IMU faults is further proposed, completing online detection and diagnosis of IMU faults for the underwater vehicle. Finally, real voyage data verify the effectiveness of the proposed algorithm.
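The innovation-based idea can be illustrated in isolation. This sketch is a stand-in, not the paper's algorithm: a linear Kalman filter on a 1-D constant-velocity model replaces the UKF on the vehicle dynamics, and a windowed normalized-innovation-squared (NIS) test against a chi-square threshold replaces the decoupling-matrix detector; the model matrices, fault size, and 99% threshold (≈ 6.63 for one degree of freedom) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D constant-velocity model as a stand-in for the vehicle navigation model.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

x_true = np.array([0.0, 0.1])
x = np.zeros(2)
P = np.eye(2)

nis = []                                  # normalized innovation squared
for k in range(300):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]))
    if k >= 200:
        z = z + 1.0                       # sensor bias fault from step 200 on

    # Kalman predict/update, recording the innovation statistic.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    v = z - H @ x                         # innovation
    nis.append(float(v @ np.linalg.solve(S, v)))
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ v
    P = (np.eye(2) - K @ H) @ P

# Fault-free NIS is chi-square with 1 dof; a windowed mean above the 99%
# quantile (~6.63) flags a persistent sensor fault.
alarm = [k for k in range(4, 300) if np.mean(nis[k - 4:k + 1]) > 6.63]
print("first alarm near step", alarm[0] if alarm else None)
```

The same test carries over to the UKF directly, since the UKF also produces an innovation and its covariance at every step.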

6.
潘腾, 姜顺, 潘丰. 《信息与控制》 (Information and Control), 2023, 52(1): 104-114
For a class of discrete-time networked control systems with actuator faults and partially decoupled disturbances, the active fault-tolerant control problem under random loss of measurement data is studied. First, the original system is converted by model transformation into an equivalent augmented-state system. Then, accounting for random measurement-data loss, an unknown input observer (UIO) is constructed to jointly estimate the system state and faults; based on the online estimates of state and fault, a signal-compensation fault-tolerant control law is designed to achieve active fault-tolerant control of the original system. In this algorithm, existence conditions for the observer and controller gains are obtained by stochastic analysis of the error system using Lyapunov stability theory, and the corresponding estimator and controller parameters are obtained by solving matrix inequalities with convex constraints online. Finally, a simulation example of a jet-engine model verifies the effectiveness of the proposed fault-estimation and active fault-tolerant control method.

7.
Exploiting the nonlinear modeling capability of neural networks, an observer-based fault-detection and diagnosis method is proposed for a class of nonlinear systems with modeling uncertainty. The designed observer not only realizes fault detection; a neural-network fault estimator additionally estimates the system fault vector online. Analysis verifies that the method is robust to modeling errors and external disturbances. Simulation results show that the proposed method is effective.

8.
To address the first-frame jump in recurrent-neural-network (RNN)-based human-motion synthesis, which degrades the quality of the generated motion, a human-motion synthesis method with hidden-state initialization is proposed: the initial hidden state is treated as a free variable, the network's objective function as the optimization target, and gradient descent is used to solve for a suitable initial hidden state. Compared with the encoder-recurrent-decoder (ERD) and residual gated recurrent unit (RGRU) models, the proposed method reduces the first-frame prediction error by 63.51% and 6.90%, and the 10-frame total error by 50.00% and 4.89%, respectively. Experimental results show that the method outperforms approaches without initial-hidden-state estimation in both motion-synthesis quality and motion-prediction accuracy; by accurately estimating the first-frame hidden state of an RNN-based human-motion model, it improves synthesis quality and provides reliable data support for action-recognition models in real-time safety monitoring.
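The optimization step can be sketched on a toy model. This is not the paper's network: a tiny fixed tanh RNN with random weights stands in for the trained motion model, the "motion frames" are random vectors, and a finite-difference gradient replaces the autodiff gradient that would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny fixed "motion model": a random recurrent net standing in for the
# trained RNN (its weights would normally come from training).
H, D = 8, 3
Wh = 0.5 * rng.standard_normal((H, H)) / np.sqrt(H)
Wx = rng.standard_normal((H, D))
Wo = rng.standard_normal((D, H))

def rollout(h0, xs):
    h, ys = h0, []
    for x in xs:
        h = np.tanh(Wh @ h + Wx @ x)
        ys.append(Wo @ h)
    return np.array(ys)

xs = rng.standard_normal((5, D))          # conditioning frames (inputs)
target = rng.standard_normal((5, D))      # "ground-truth" next frames

def loss(h0):
    return float(((rollout(h0, xs) - target) ** 2).mean())

# Optimize the initial hidden state by gradient descent; finite differences
# keep the sketch short (autodiff would be used in practice).
def grad(h0, eps=1e-5):
    g = np.zeros_like(h0)
    for i in range(H):
        e = np.zeros(H); e[i] = eps
        g[i] = (loss(h0 + e) - loss(h0 - e)) / (2 * eps)
    return g

h0 = np.zeros(H)
before = loss(h0)
for _ in range(200):
    h0 -= 0.1 * grad(h0)
after = loss(h0)
print(f"loss before {before:.4f} -> after {after:.4f}")
```

Treating h0 as a decision variable, rather than fixing it to zeros, is exactly what removes the first-frame discontinuity the abstract describes.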

9.
For a class of uncertain nonlinear dynamic systems, a robust fault-detection method based on an online neural-network approximation structure is proposed. The method monitors abnormal behavior of the dynamic system by learning nonlinear fault characteristics through the online approximation structure; when a fault occurs, the online estimator can approximate a variety of possible unknown faults, which are then diagnosed and accommodated. The online learning law for the network weights requires no persistent excitation, and Lyapunov stability theory guarantees that the closed-loop error system is uniformly ultimately bounded.

10.
Neural-network-based fault detection for aviation sensors
An off-line trained neural network is used for navigation sensor fault detection. First, normal flight data are used to train the network off-line and to determine the estimator structure; the trained network with the chosen structure then serves as an estimator making one-step predictions of the sensor readings. If the difference between the predicted and actual sensor values consists only of recursion error and sensor output noise, the sensor is judged to be working normally; if the corresponding residual component increases significantly, the sensor is judged faulty. A detection strategy is designed accordingly, avoiding unnecessary alarms and switching while still monitoring and reporting faults accurately and promptly. Simulation tests confirm the feasibility of the method.

11.
Cascade correlation is a very flexible, efficient and fast algorithm for supervised learning. It incrementally builds the network by adding hidden units one at a time, until the desired input/output mapping is achieved, connecting all previously installed units to each new unit. Consequently, each new unit in effect adds a new layer, and the fan-in of the hidden and output units keeps increasing as more units are added. The resulting structure can be hard to implement in VLSI, because the connections are irregular and the fan-in is unbounded. Moreover, the depth, or propagation delay, of the resulting network is directly proportional to the number of units and can be excessive. We have modified the algorithm to generate networks with restricted fan-in and small depth (propagation delay) by controlling the connectivity. Our results reveal a tradeoff between connectivity and other performance attributes such as depth, total number of independent parameters, and learning time.
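The structural consequences can be made concrete with simple bookkeeping (no training involved). The "layered" variant below is one hypothetical way to cap connectivity, chosen for illustration; the paper's actual connectivity-control scheme may differ.

```python
import math

# Structure-only comparison: classic cascade correlation vs a layered
# variant that caps connectivity (an assumed scheme, for illustration).
def classic(n_inputs, n_hidden):
    # Unit k connects to the inputs and all k-1 earlier hidden units,
    # so fan-in grows without bound and every unit adds a layer.
    max_fan_in = n_inputs + n_hidden - 1
    depth = n_hidden
    return max_fan_in, depth

def layered(n_inputs, n_hidden, width):
    # Units are grouped into layers of `width`; each connects only to the
    # inputs and the previous layer, bounding both fan-in and depth.
    max_fan_in = n_inputs + width
    depth = math.ceil(n_hidden / width)
    return max_fan_in, depth

print("classic (fan-in, depth):", classic(5, 12),
      "layered:", layered(5, 12, width=4))
```

With 5 inputs and 12 hidden units, the classic scheme reaches fan-in 16 and depth 12, while the capped variant stays at fan-in 9 and depth 3, which is the VLSI-friendliness argument in the abstract.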

12.
We investigate empirically the performance under damage conditions of single- and multilayer perceptrons (MLPs), with various numbers of hidden units, in a representative pattern-recognition task. While some degree of graceful degradation was observed, the single-layer perceptron was considerably less fault tolerant than any of the multilayer perceptrons, including one with fewer adjustable weights. Our initial hypothesis that fault tolerance would be significantly improved for multilayer nets with larger numbers of hidden units proved incorrect; indeed, excess hidden units appeared to be a liability. A simple technique (called augmentation) is described that successfully translates excess hidden units into improved fault tolerance. Finally, our results were supported by applying singular value decomposition (SVD) analysis to the MLPs' internal representations.
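A tiny damage experiment makes the augmentation idea tangible. This is a minimal sketch under stated assumptions, not the paper's experiment: a hand-built threshold-unit XOR net replaces the trained MLP, "augmentation" is simplified to replicating each hidden unit and dividing its output weight by the replication factor, and "damage" means zeroing one output weight.

```python
import numpy as np

# Hand-built XOR net with threshold units: 2 hidden units, 1 output.
Wh = np.array([[1.0, 1.0], [1.0, 1.0]])
bh = np.array([-0.5, -1.5])
Wo = np.array([1.0, -2.0])

def accuracy(Wh, bh, Wo):
    ok = 0
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        h = (Wh @ np.array([x1, x2]) + bh > 0).astype(float)
        ok += int((Wo @ h > 0.5) == (x1 != x2))
    return ok / 4

def augment(Wh, bh, Wo, r):
    # Replicate each hidden unit r times; scale output weights by 1/r so the
    # undamaged function is unchanged (a minimal form of "augmentation").
    return np.repeat(Wh, r, axis=0), np.repeat(bh, r), np.repeat(Wo / r, r)

def mean_acc_one_fault(Wh, bh, Wo):
    # Average accuracy over every possible single output-weight failure.
    accs = []
    for i in range(Wo.size):
        Wo_d = Wo.copy(); Wo_d[i] = 0.0
        accs.append(accuracy(Wh, bh, Wo_d))
    return float(np.mean(accs))

plain = mean_acc_one_fault(Wh, bh, Wo)
Wh4, bh4, Wo4 = augment(Wh, bh, Wo, 4)
aug = mean_acc_one_fault(Wh4, bh4, Wo4)
print(f"undamaged {accuracy(Wh, bh, Wo):.2f}, "
      f"damaged plain {plain:.2f}, damaged augmented {aug:.2f}")
```

Both nets compute XOR perfectly when intact, but a single lost weight cripples the minimal net while the replicated net absorbs it, which is the direction of the effect the abstract reports.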

13.
Architecture selection is a very important aspect of the design of neural networks (NNs), used to optimally tune performance and computational complexity. Sensitivity analysis has been applied successfully to prune irrelevant parameters from feedforward NNs. This paper presents a new pruning algorithm that uses sensitivity analysis to quantify the relevance of input and hidden units. A new statistical pruning heuristic, based on variance analysis, is proposed to decide which units to prune. The basic idea is that a parameter whose sensitivity variance is not significantly different from zero is irrelevant and can be removed. Experimental results show that the new pruning algorithm correctly prunes irrelevant input and hidden units; the algorithm is also compared with standard pruning algorithms.
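The variance-of-sensitivity criterion can be demonstrated on a fixed one-hidden-layer tanh net. The weights below are set by hand so that input 2 is nearly irrelevant (standing in for a trained network), and the pruning rule uses a simple relative-variance cutoff rather than the paper's statistical test.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-hidden-layer tanh net, y = w2 . tanh(W1 x + b1). The third input
# column is near zero, making input 2 (nearly) irrelevant by construction.
W1 = np.array([[ 1.2, -0.8, 0.001],
               [-0.5,  1.0, 0.002],
               [ 0.7,  0.6, 0.001]])
b1 = np.array([0.1, -0.2, 0.0])
w2 = np.array([1.0, -1.5, 0.8])

def sensitivities(X):
    """dy/dx_i per sample: rows = samples, cols = inputs (chain rule)."""
    Z = X @ W1.T + b1                 # hidden pre-activations
    G = 1 - np.tanh(Z) ** 2           # tanh' at each hidden unit
    return (G * w2) @ W1              # sum_j w2_j * tanh'(z_j) * W1[j, i]

X = rng.standard_normal((500, 3))
var = sensitivities(X).var(axis=0)

# Variance heuristic: prune inputs whose sensitivity variance is negligible
# relative to the largest (a stand-in for the paper's statistical test).
prune = var < 1e-3 * var.max()
print("sensitivity variance per input:", np.round(var, 6), "prune:", prune)
```

Inputs 0 and 1 show order-one sensitivity variance across samples; input 2's sensitivity is essentially constant at zero, so only it is flagged for removal.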

14.
Two-Phase Construction of Multilayer Perceptrons Using Information Theory
This brief presents a two-phase construction approach for pruning both input and hidden units of multilayer perceptrons (MLPs) based on mutual information (MI). First, all features of input vectors are ranked according to their relevance to target outputs through a forward strategy. The salient input units of an MLP are thus determined according to the order of the ranking result and by considering their contributions to the network's performance. Then, the irrelevant features of input vectors can be identified and eliminated. Second, the redundant hidden units are removed from the trained MLP one after another according to a novel relevance measure. Compared with its related work, the proposed strategy exhibits better performance. Moreover, experimental results show that the proposed method is comparable or even superior to support vector machine (SVM) and support vector regression (SVR). Finally, the advantages of the MI-based method are investigated in comparison with the sensitivity analysis (SA)-based method.
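The first phase, ranking candidate inputs by mutual information with the target, can be sketched with a plain histogram MI estimator. The data and the estimator here are illustrative stand-ins, not the paper's setup: three synthetic features with known relevance, and a binned estimate of I(X;Y) in nats.

```python
import numpy as np

rng = np.random.default_rng(4)

def mutual_info(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats (simple illustrative estimator)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Three candidate input features: x0 drives the target, x1 is weakly
# related (it mixes x0 with noise), x2 is pure noise (synthetic data).
n = 5000
x0 = rng.standard_normal(n)
x1 = x0 + 2.0 * rng.standard_normal(n)
x2 = rng.standard_normal(n)
target = np.sin(x0) + 0.1 * rng.standard_normal(n)

mi = [mutual_info(x, target) for x in (x0, x1, x2)]
ranking = np.argsort(mi)[::-1]
print("MI per feature:", np.round(mi, 3), "ranking:", ranking)
```

The forward strategy then admits features in ranking order until the network's performance stops improving, after which the remaining (low-MI) features are dropped.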

15.
A novel approach is presented to visualize and analyze decision boundaries for feedforward neural networks. First order sensitivity analysis of the neural network output function with respect to input perturbations is used to visualize the position of decision boundaries over input space. Similarly, sensitivity analysis of each hidden unit activation function reveals which boundary is implemented by which hidden unit. The paper shows how these sensitivity analysis models can be used to better understand the data being modelled, and to visually identify irrelevant input and hidden units.
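The mechanism is easy to see on a single sigmoid unit: the output's first-order sensitivity to the input peaks exactly on the decision boundary. The weights below are chosen by hand so the boundary is the line x0 + x1 = 1; this is a one-unit illustration, not the paper's visualization pipeline.

```python
import numpy as np

# Fixed 2-input sigmoid unit y = sigma(w.x + b); its decision boundary is
# the line x0 + x1 = 1 (weights chosen by hand for illustration).
w = np.array([4.0, 4.0]); b = -4.0

def sig(z):
    return 1 / (1 + np.exp(-z))

# First-order sensitivity |dy/dx| over a grid: |sigma'(z)| * |w|, which is
# largest where y = 0.5, i.e. on the decision boundary.
xs = np.linspace(0, 1, 41)
X0, X1 = np.meshgrid(xs, xs)
Y = sig(w[0] * X0 + w[1] * X1 + b)
sens = Y * (1 - Y) * np.linalg.norm(w)

i, j = np.unravel_index(np.argmax(sens), sens.shape)
print("peak sensitivity at", (X0[i, j], X1[i, j]))
```

Plotting `sens` as a heat map draws the boundary directly; repeating the computation per hidden unit shows which unit implements which boundary segment.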

16.
An improved method for determining a reasonable number of hidden nodes in BP neural networks
Choosing a reasonable number of hidden-layer nodes is a key problem in constructing a BP neural network, with a strong influence on the network's adaptability and learning rate. An improved method for determining the number of hidden nodes is proposed here. Based on the linear dependence and independence relations among the outputs of the hidden-layer neurons, the method prunes hidden nodes and reduces the network size. A model is built with the machining parameters of a part's process plan as the BP network inputs and the finished part dimensions as the outputs; applying the method to this network, the training results demonstrate its effectiveness.
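The linear-dependence idea can be sketched via the rank of the hidden-layer output matrix. The matrix below is synthetic, with one column deliberately built as a combination of two others to imitate a redundant hidden unit; a real application would use the activations of a trained network instead.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hidden-layer output matrix: rows = training samples, columns = hidden
# units. Unit 3 is a linear combination of units 0 and 1 by construction,
# imitating the redundancy the method removes.
n = 200
H = rng.standard_normal((n, 3))
H = np.column_stack([H, 0.5 * H[:, 0] - 2.0 * H[:, 1]])

# Numerical rank from the singular values tells how many linearly
# independent hidden units the network actually needs.
s = np.linalg.svd(H, compute_uv=False)
rank = int((s > 1e-8 * s[0]).sum())
print("hidden units:", H.shape[1], "independent units:", rank)
```

A rank deficit of one says one hidden node can be removed without losing representational capacity, which is exactly the reduction criterion described above.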

17.
Data sets often contain redundant, irrelevant and noisy data. Using proper data to train a network can speed up training, simplify the learned structure, and improve its performance. A two-phase training algorithm is proposed. In the first phase, the number of input units of the network is determined using an information-based method: only those attributes that meet certain inclusion criteria are considered as inputs to the network. In the second phase, the number of hidden units is selected automatically based on the performance of the network on the training data, adding one hidden unit at a time only when necessary. The experimental results show that this new algorithm achieves a faster learning time, a simpler network and improved performance.

18.
Artificial neural networks (ANN) using raw electroencephalogram (EEG) data were developed and tested off-line to detect transient epileptiform discharges (spike and spike/wave) and EMG activity in an ongoing EEG. In the present study, a feedforward ANN with a variable number of input and hidden layer units and two output units was used to optimize the detection system. The ANN system was trained and tested with the backpropagation algorithm using a large data set of exemplars. The effects of different EEG time windows and the number of hidden layer neurons were examined using rigorous statistical tests for optimum detection sensitivity and selectivity. The best ANN configuration occurred with an input time window of 150 msec (30 input units) and six hidden layer neurons. This input interval contained information on the wave component of the epileptiform discharge which improved detection. Two-dimensional receiver operating curves were developed to define the optimum threshold parameters for best detection. Comparison with previous networks using raw EEG showed improvement in both sensitivity and selectivity. This study showed that raw EEG can be successfully used to train ANNs to detect epileptogenic discharges with a high success rate without resorting to experimenter-selected parameters which may limit the efficiency of the system.
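The input-layer sizing above implies a 200 Hz sampling rate (150 ms / 30 samples); that rate is inferred from the numbers in the abstract, not stated there. The sketch below shows the corresponding windowing of a raw trace into fixed-length network inputs, with a synthetic signal and an assumed 50% overlap.

```python
import numpy as np

# 150 ms at an inferred 200 Hz sampling rate -> 30 input units.
fs = 200                                   # assumed sampling rate, Hz
window_ms = 150
win = int(window_ms * fs / 1000)           # window length in samples

eeg = np.random.default_rng(6).standard_normal(1000)   # fake raw EEG trace
step = win // 2                                        # assumed 50% overlap
windows = np.lib.stride_tricks.sliding_window_view(eeg, win)[::step]
print("window length:", win, "number of windows:", windows.shape[0])
```

Each row of `windows` would feed the 30 input units directly, with no hand-engineered features, which is the point the abstract emphasizes.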

19.
Information Fusion, 2002, 3(4): 259-266
We provide several enhancements to our previously introduced algorithm for the sequential construction of a hybrid network of radial and perceptron hidden units [6]. At each stage, the algorithm subdivides the input space in order to reduce the entropy of the data conditioned on the clusters. The algorithm determines whether a radial or a perceptron unit is required at a given region of input space by using the local likelihood of the model under each unit type. Given an error target, the algorithm also determines the number of hidden units. This results in a final architecture that is often much smaller than a radial basis function (RBF) network or a multilayer perceptron (MLP). A benchmark on six classification problems is given; the most striking performance improvement is achieved on the vowel data set [8].

