Data reduction algorithm based on principle of distributional equivalence for fault diagnosis |
| |
Authors: | Ketan P. Detroja, Ravindra D. Gudi, Sachin C. Patwardhan
| |
Affiliation: | 1. Department of Electrical Engineering, Indian Institute of Technology Hyderabad, Yeddumailaram, Medak 502205, India; 2. Department of Chemical Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India
| |
Abstract: | Historical-data-based fault diagnosis methods exploit two key strengths of multivariate statistical approaches, viz.: (i) data compression ability, and (ii) discriminatory ability. It has been shown that correspondence analysis (CA) is superior to principal component analysis (PCA) on both these counts (Detroja, Gudi, Patwardhan, & Roy, 2006a), and hence is better suited to the task of fault detection and isolation (FDI). In this paper, we propose a CA-based methodology for fault diagnosis that facilitates significant data reduction as well as better discrimination. The proposed methodology is based on the principle of distributional equivalence (PDE), a property unique to the CA algorithm that is very useful in analyzing large datasets. When applied to historical data sets for FDI, the principle can substantially reduce the size of the data matrix without appreciably affecting the discriminatory ability of the CA algorithm, thereby lowering the computational load during statistical model building. The data reduction ability of the proposed methodology is demonstrated on a simulation case study of the benchmark quadruple-tank laboratory process, and is further confirmed on experimental data obtained from the same process. The above aspect is also validated for large-scale data sets using the benchmark Tennessee Eastman process simulation case study. |
| |
Keywords: | |
|
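The principle of distributional equivalence states that rows (or columns) of a nonnegative data matrix whose conditional profiles are identical can be merged (summed) without changing the CA solution, which is what makes the data reduction described in the abstract possible. A minimal NumPy sketch of this invariance follows; it is an illustration under assumed toy data, not the authors' implementation, and it checks only the singular values (inertia) of the standardized residual matrix:

```python
import numpy as np

def ca_singular_values(N):
    """Singular values of the standardized residual matrix used in CA."""
    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    return np.linalg.svd(S, compute_uv=False)

# Toy nonnegative data matrix (values made up for illustration); row 1 is
# proportional to row 0, so the two rows have identical conditional
# profiles, i.e. they are distributionally equivalent.
N = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [5.0, 1.0, 2.0],
              [1.0, 5.0, 1.0]])

# Merge the equivalent rows by summing them -> a smaller data matrix.
N_reduced = np.vstack([N[0] + N[1], N[2], N[3]])

sv_full = ca_singular_values(N)
sv_reduced = ca_singular_values(N_reduced)
# The CA singular values are unchanged by the merge.
print(np.allclose(sv_full, sv_reduced))  # → True
```

The check works because merging two rows with the same profile leaves the Gram matrix of the standardized residuals, and hence the CA decomposition, unchanged; in an FDI setting the same idea lets similar historical samples be pooled before model building.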