Dam displacements can effectively reflect a dam's operational status, and thus establishing a reliable displacement prediction model is important for dam health monitoring. The majority of existing data-driven models, however, focus on static regression relationships and can neither capture long-term temporal dependencies nor adaptively select the most relevant influencing factors when making predictions. Moreover, emerging modeling tools such as machine learning (ML) and deep learning (DL) are mostly black-box models, which makes their physical interpretation challenging and greatly limits their practical engineering application. To address these issues, this paper proposes an interpretable mixed attention mechanism long short-term memory (MAM-LSTM) model based on an encoder-decoder architecture, which is formulated in two stages. In the encoder stage, a factor attention mechanism is developed to adaptively select the highly influential factors at each time step by referring to the previous hidden state. In the decoder stage, a temporal attention mechanism is introduced to extract the key time segments by identifying the relevant hidden states across all time steps. For interpretation purposes, our emphasis is placed on the quantification and visualization of the factor and temporal attention weights. Finally, the effectiveness of the proposed model is verified using monitoring data collected from a real-world dam, and its accuracy is compared with a classical statistical model, conventional ML models, and homogeneous DL models. The comparison demonstrates that the MAM-LSTM model outperforms the other models in most cases. Furthermore, the interpretation of the global attention weights confirms the physical rationality of our attention-based model. This work addresses the research gap in interpretable artificial intelligence for dam displacement prediction and delivers a model with both high accuracy and interpretability.
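The abstract above describes a two-stage design: a factor attention mechanism that reweights input factors at each time step using the previous encoder state, and a temporal attention mechanism that weights encoder hidden states before decoding. The sketch below is a minimal PyTorch-style illustration of this dual-attention idea under stated assumptions; it is not the authors' MAM-LSTM implementation, and all names and dimensions (DualAttentionLSTM, hidden_dim, the scoring layers) are illustrative.

```python
# Minimal sketch of a dual-attention (factor + temporal) LSTM encoder-decoder.
# Illustrative approximation of the general idea only; names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionLSTM(nn.Module):
    def __init__(self, n_factors: int, hidden_dim: int = 64):
        super().__init__()
        self.n_factors, self.hidden_dim = n_factors, hidden_dim
        # Factor attention: scores each input factor given the previous encoder state.
        self.factor_attn = nn.Linear(2 * hidden_dim + 1, 1)
        self.encoder = nn.LSTMCell(n_factors, hidden_dim)
        # Temporal attention: scores each encoder hidden state given the decoder query.
        self.temporal_attn = nn.Linear(2 * hidden_dim, 1)
        self.decoder = nn.LSTMCell(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):                                     # x: (batch, T, n_factors)
        batch, T, _ = x.shape
        h = x.new_zeros(batch, self.hidden_dim)
        c = x.new_zeros(batch, self.hidden_dim)
        enc_states = []
        for t in range(T):
            # Factor attention at time step t: weight each influencing factor.
            state = torch.cat([h, c], dim=1).unsqueeze(1).expand(-1, self.n_factors, -1)
            factors = x[:, t, :].unsqueeze(-1)                # (batch, n_factors, 1)
            scores = self.factor_attn(torch.cat([state, factors], dim=-1)).squeeze(-1)
            alpha = F.softmax(scores, dim=1)                  # factor attention weights
            h, c = self.encoder(alpha * x[:, t, :], (h, c))
            enc_states.append(h)
        enc_states = torch.stack(enc_states, dim=1)           # (batch, T, hidden_dim)

        # Temporal attention over all encoder hidden states.
        query = enc_states[:, -1, :].unsqueeze(1).expand(-1, T, -1)
        beta = F.softmax(
            self.temporal_attn(torch.cat([query, enc_states], dim=-1)).squeeze(-1), dim=1)
        context = (beta.unsqueeze(-1) * enc_states).sum(dim=1)
        d, _ = self.decoder(context, (enc_states[:, -1, :],
                                      torch.zeros_like(context)))
        # Return the prediction plus the last-step factor weights and temporal weights,
        # which can be visualized for interpretation.
        return self.out(d), alpha, beta

# Example: 12 past time steps of 6 influencing factors -> one displacement value.
model = DualAttentionLSTM(n_factors=6)
y_hat, alpha, beta = model(torch.randn(8, 12, 6))
```

Quantifying and visualizing the returned alpha and beta weights is what gives this family of models its interpretability: alpha indicates which influencing factors dominate, while beta indicates which past time segments the prediction relies on.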
The use and creation of machine-learning-based solutions to solve problems or reduce their computational costs are becoming increasingly widespread in many domains, and Deep Learning plays a large part in this growth. However, it has drawbacks such as a lack of explainability and black-box behavior. Over the last few years, Visual Analytics has provided several proposals to cope with these drawbacks, supporting the emerging eXplainable Deep Learning field. This survey aims to (i) systematically report the contributions of Visual Analytics to eXplainable Deep Learning; (ii) spot gaps and challenges; (iii) serve as an anthology of visual analytics solutions ready to be exploited and put into operation by the Deep Learning community (architects, trainers, and end users); and (iv) demonstrate the degree of maturity, ease of integration, and results for specific domains. The survey concludes by identifying future research challenges and bridging activities that can strengthen the role of Visual Analytics as effective support for eXplainable Deep Learning and foster the adoption of Visual Analytics solutions in the eXplainable Deep Learning community. An interactive, explorable version of this survey is available online at https://aware-diag-sapienza.github.io/VA4XDL.
With the increasing deployment of deep learning-based systems in various scenarios, it is becoming important to conduct sufficient testing and evaluation of deep learning models to improve their interpretability and robustness. Recent studies have proposed different criteria and strategies for deep neural network (DNN) testing. However, they rarely test the robustness of DNN models effectively and lack interpretability. This paper proposes a new priority testing criterion, called DeepLogic, to analyze the robustness of DNN models from the perspective of model interpretability. We first define the neural units in a DNN with the highest average activation probability as "interpretable logic units". We analyze the changes in these units under adversarial attacks to evaluate the model's robustness. The interpretable logic units of the inputs are then taken as context attributes, and the probability distribution of the model's softmax layer is taken as internal attributes, to establish a comprehensive test prioritization framework. The context and internal factors are fused by weighting, and the test cases are sorted according to this priority. Experimental results on four popular DNN models using eight testing metrics show that DeepLogic significantly outperforms existing state-of-the-art methods.
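The prioritization scheme described above combines a context attribute (derived from highly activated "logic units") with an internal attribute (derived from the softmax output) through a weighted fusion. The sketch below illustrates that general pipeline only; it is not the DeepLogic implementation, and the activation-deviation context score, the entropy-based internal score, and the fusion weight w are all assumptions made for illustration.

```python
# Illustrative sketch of attribute-fusion test prioritization (not DeepLogic itself).
# Assumed ingredients: "logic units" = neurons with the highest mean activation on a
# reference set; context score = deviation of a test input on those units; internal
# score = softmax entropy; the two scores are fused with weight w to rank test cases.
import numpy as np

def logic_units(reference_activations: np.ndarray, k: int = 10) -> np.ndarray:
    """Indices of the k neurons with the highest mean activation (assumed definition)."""
    return np.argsort(reference_activations.mean(axis=0))[-k:]

def context_scores(test_activations, units, reference_activations):
    """Deviation of each test case from the reference profile on the logic units."""
    ref_profile = reference_activations[:, units].mean(axis=0)
    return np.abs(test_activations[:, units] - ref_profile).mean(axis=1)

def internal_scores(softmax_outputs: np.ndarray) -> np.ndarray:
    """Prediction uncertainty measured as softmax entropy."""
    p = np.clip(softmax_outputs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def prioritize(test_activations, softmax_outputs, reference_activations, w=0.5):
    """Fuse context and internal scores; return test indices, highest priority first."""
    ctx = context_scores(test_activations, logic_units(reference_activations),
                         reference_activations)
    itn = internal_scores(softmax_outputs)
    norm = lambda s: (s - s.min()) / (np.ptp(s) + 1e-12)
    priority = w * norm(ctx) + (1 - w) * norm(itn)
    return np.argsort(-priority)

# Toy usage with random data standing in for a trained DNN's activations and outputs.
rng = np.random.default_rng(0)
ref_act = rng.random((200, 128))          # penultimate-layer activations on clean data
test_act = rng.random((50, 128))          # activations for candidate test cases
probs = rng.dirichlet(np.ones(10), 50)    # softmax outputs for the test cases
print(prioritize(test_act, probs, ref_act)[:5])
```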
In this paper, we propose the use of subspace clustering to detect the states of dynamical systems from sequences of observations. In particular, we generate sparse and interpretable models that relate the states of aquatic drones involved in autonomous water monitoring to the properties (e.g., statistical distribution) of data collected by drone sensors. The subspace clustering algorithm used is called SubCMedians. A quantitative experimental analysis is performed to investigate the connections between (i) learning parameters and performance and (ii) noise in the data and performance. The clusterings obtained in this analysis outperform those generated by previous approaches.
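The abstract describes detecting system states by clustering statistical properties of the sensor data streams. The sketch below illustrates that general pipeline under stated assumptions: scikit-learn's KMeans is used purely as a stand-in for the SubCMedians subspace-clustering algorithm (whose implementation is not assumed here), and the window length, feature set, and simulated sensors are illustrative.

```python
# Sketch of state detection by clustering window-level statistics of sensor data.
# KMeans is only a stand-in for SubCMedians; window size, features, and sensor
# semantics are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def window_features(signals: np.ndarray, window: int = 50) -> np.ndarray:
    """Summarize each non-overlapping window of multivariate sensor data with
    simple statistics (mean, std, min, max per sensor)."""
    n_windows = signals.shape[0] // window
    feats = []
    for i in range(n_windows):
        chunk = signals[i * window:(i + 1) * window]
        feats.append(np.concatenate([chunk.mean(0), chunk.std(0),
                                     chunk.min(0), chunk.max(0)]))
    return np.asarray(feats)

# Toy data: 3 simulated drone sensors under two operating regimes.
rng = np.random.default_rng(1)
signals = np.vstack([rng.normal(0.0, 1.0, (500, 3)),    # e.g., "station-keeping"
                     rng.normal(3.0, 0.5, (500, 3))])   # e.g., "transit"

X = window_features(signals)
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(states)   # one detected state label per time window
```

A subspace method such as SubCMedians additionally restricts each cluster to a subset of the feature dimensions, which is what makes the resulting state models sparse and easier to interpret than full-space clusterings like the stand-in above.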