Similar Articles
20 similar articles found (search time: 593 ms)
1.
Deep learning models have achieved high performance across domains such as medical decision-making, autonomous vehicles, and decision support systems. Despite this success, the inner mechanisms of these models are opaque because their internal representations are too complex for a human to understand. This opacity makes it hard to understand the how or the why behind the predictions of deep learning models. There has been growing interest in model-agnostic methods that make deep learning models more transparent and explainable to humans. Some researchers have recently argued that for a machine to achieve human-level explainability, it needs to provide causally understandable explanations to humans, a property known as causability. One class of algorithms with the potential to provide causability is counterfactuals. This paper presents an in-depth systematic review of the diverse existing literature on counterfactuals and causability for explainable artificial intelligence (AI). We performed Latent Dirichlet Allocation (LDA) topic modelling under the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to find the most relevant articles. This analysis yielded a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications to real-world data. Our research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded in a causal theoretical formalism and, consequently, cannot promote causability for a human decision-maker. Furthermore, our findings suggest that the explanations derived from popular algorithms in the literature reflect spurious correlations rather than cause-effect relationships, leading to sub-optimal, erroneous, or even biased explanations. Thus, this paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches for explainable AI.
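As a hedged sketch of the screening step described above: the snippet below runs LDA topic modelling over a handful of placeholder abstracts with scikit-learn. The corpus, component count, and printed top words are illustrative assumptions, not the review's actual setup.

```python
# Minimal sketch of LDA topic modelling over paper abstracts, as used in the
# PRISMA-guided screening described above. Corpus and parameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "counterfactual explanations for black-box models",
    "causal inference and causability in explainable AI",
    "model-agnostic interpretability of deep networks",
]  # in practice: the abstracts retrieved by the PRISMA search

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per latent topic, used to group the literature into themes.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```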

2.
Objective: In pattern recognition, classifiers are usually trained on large amounts of labelled data with effective machine learning algorithms in order to cope with uncertainty. However, this process lacks knowledge representation and explainability. Research in cognitive and experimental psychology shows that humans rarely rely on such costly mechanisms; instead, they handle uncertainty in object recognition, and provide explainability, through representation, induction, reasoning, explanation, and constraint propagation, means similar to those of symbolic AI. Starting from traditional symbolic computation, this paper therefore proposes an explainable approach based on skeleton topological structure representation. Method: Skeleton trees are used as the basic device for a formal representation of an object's topological and geometric features, and knowledge is extracted from a small number of same-class representations within a generalization framework to form an explicit, generalized representation of the object category. Results: In the experiments on forming generalized category representations, path reconstruction intuitively demonstrates the geometric and physical meaning of the most general representation obtained from objects of the same class. In the explainability-verification experiments, applying the topology across datasets reveals the specific differences of new test samples relative to the generalized representation, showing that the representation is well explainable. Finally, in the uncertainty-reasoning experiments on shape completion, the method not only yields a recognition conclusion but also clearly exhibits the evidence behind the judgement, further verifying the explainability of the representation. Conclusion: The experiments show that the generalized formal representation copes with uncertainty in size, colour and shape. The proposed method avoids the uncertainty introduced by texture features, applies to any primitive-based representation scheme, and offers better robustness, generality and explainability at a lower computational cost.
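Purely as a hypothetical illustration of the idea (the paper's formalism is not reproduced here), the sketch below shows one way a skeleton-tree node carrying topological and geometric features, plus a naive generalization step over two same-class instances, could be written; `SkelNode`, its attributes, and the averaging rule are all invented.

```python
# Hypothetical sketch of a skeleton-tree representation and a naive
# generalization step over same-class instances; node attributes are invented
# for illustration and do not reproduce the paper's formalism.
from dataclasses import dataclass, field

@dataclass
class SkelNode:
    degree: int                      # topological feature: branch count
    rel_length: float                # geometric feature: normalized branch length
    children: list = field(default_factory=list)

def generalize(a: SkelNode, b: SkelNode) -> "SkelNode | None":
    """Keep structure shared by two instances; average geometry; drop the rest."""
    if a.degree != b.degree:
        return None                  # topologies disagree -> not part of the common core
    kids = [g for g in (generalize(x, y) for x, y in zip(a.children, b.children)) if g]
    return SkelNode(a.degree, (a.rel_length + b.rel_length) / 2, kids)
```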

3.
The use and creation of machine-learning-based solutions to solve problems or reduce their computational costs are becoming increasingly widespread in many domains. Deep Learning plays a large part in this growth. However, it has drawbacks, such as a lack of explainability and black-box behaviour. Over the last few years, Visual Analytics has produced several proposals to cope with these drawbacks, supporting the emerging eXplainable Deep Learning field. This survey aims to (i) systematically report the contributions of Visual Analytics for eXplainable Deep Learning; (ii) spot gaps and challenges; (iii) serve as an anthology of visual analytical solutions ready to be exploited and put into operation by the Deep Learning community (architects, trainers and end users); and (iv) demonstrate the degree of maturity, ease of integration, and results for specific domains. The survey concludes by identifying future research challenges and bridging activities that would help strengthen the role of Visual Analytics as effective support for eXplainable Deep Learning and foster the adoption of Visual Analytics solutions in the eXplainable Deep Learning community. An interactive, explorable version of this survey is available online at https://aware-diag-sapienza.github.io/VA4XDL.

4.
An Improved Method for Rule Knowledge Acquisition   (Cited 1 time: 0 self-citations, 1 by others)
Knowledge acquisition is the most fundamental and important process in building an expert system, yet it remains the "bottleneck" of expert system development. This article proposes an improved technique for the automatic machine acquisition of rule knowledge. It treats learning as a heuristic search through a space of symbolic descriptions and can determine decision rules by induction from examples of expert decisions, thereby greatly simplifying the transfer of knowledge from expert to machine.
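As a toy illustration of learning-as-heuristic-search over symbolic descriptions (not the article's actual algorithm), the sketch below greedily induces a single decision rule from labelled expert examples; the attribute names and data are invented.

```python
# Toy illustration of inducing a decision rule from expert examples by
# greedy covering (a simple heuristic search over symbolic descriptions);
# data and attribute names are invented.
examples = [
    ({"fever": "high", "cough": "yes"}, "flu"),
    ({"fever": "high", "cough": "no"},  "flu"),
    ({"fever": "none", "cough": "yes"}, "cold"),
]

def induce_rule(target: str) -> dict:
    pos = [e for e, c in examples if c == target]
    neg = [e for e, c in examples if c != target]
    rule = {}
    # While some negative example still matches the rule, add the condition
    # (shared by all positives) that excludes the most negatives.
    while any(all(n.get(a) == v for a, v in rule.items()) for n in neg):
        best = max(
            ((a, v) for e in pos for a, v in e.items()
             if a not in rule and all(p.get(a) == v for p in pos)),
            key=lambda av: sum(n.get(av[0]) != av[1] for n in neg),
        )
        rule[best[0]] = best[1]
    return rule

print(induce_rule("flu"))  # -> {'fever': 'high'}
```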

5.
Over the years we have developed the Disciple theory, methodology, and family of tools for building knowledge-based agents. This approach consists of developing an agent shell that can be taught directly by a subject matter expert in a way that resembles how the expert would teach a human apprentice when solving problems in cooperation. This paper presents the most recent version of the Disciple approach and its implementation in the Disciple–RKF (rapid knowledge formation) system. Disciple–RKF is based on mixed-initiative problem solving, where the expert solves the more creative parts of the problem and the agent solves the more routine ones; integrated teaching and learning, where the agent helps the expert teach it by asking relevant questions, and the expert helps the agent learn by providing examples, hints, and explanations; and multistrategy learning, where the agent integrates multiple learning strategies, such as learning from examples, learning from explanations, and learning by analogy, to learn from the expert how to solve problems. Disciple–RKF has been applied to build learning and reasoning agents for military center of gravity analysis, which are used in several courses at the US Army War College.

6.
Processing of multiple representations in multimedia learning environments is considered to help learners obtain a more complete overview of the domain and gain deeper knowledge. This is based on the idea that relating and translating different representations leads to reflection beyond the boundaries and details of the separate representations. To achieve this, the design of a learning environment should support learners in adequately processing multiple representations. In this study, we compared a scientific inquiry learning environment providing instructional support with directive self-explanation prompts to relate and translate between representations against one providing instructional support with general self-explanation prompts. Learners who received the directive prompts outperformed learners who received general prompts on test items assessing domain knowledge. These positive results did not extend to transfer items or to items measuring learners' general ability to relate and translate representations. The results suggest that learner support should promote the active relation of representations and translation between them to foster domain knowledge, and that other forms of support (e.g. extended training) might be necessary to make learners more expert processors of multiple representations.

7.
Non-destructive testing of welds based on radiographic images is crucial for improving the reliability of aerospace structural components. The deep learning method represented by the convolutional neural network (CNN) has received extensive attention in welding radiographic image recognition (WRIR) owing to its powerful adaptive feature extraction ability. However, CNN-based WRIR faces the key challenges of small sample sizes and poor explainability. Inspired by the process by which experts interpret radiographic film, an expert-knowledge-empowered CNN for WRIR is proposed. Two self-supervised learning (SSL) tasks, radiographic image deblurring and brightness adjustment, are designed to model expert experience. The expert knowledge learned during SSL is used to guide the CNN in identifying weld defects. The results show that the proposed method improves the inductive bias of the CNN model, achieves faster convergence and higher recognition accuracy under small-sample conditions, and reaches an F1-score of 97.65%. Moreover, the expert knowledge learned during SSL and the decision-making basis of the CNN model are visualized from both global and local perspectives, improving the explainability of CNN-based WRIR.
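A minimal sketch of the self-supervised idea described above, assuming PyTorch: a brightness-shift corruption is applied to image patches and a small encoder learns to restore the original, after which the encoder could be reused for defect recognition. The architecture and values are placeholders, not the paper's model.

```python
# Sketch of a self-supervised brightness-restoration pretext task in the
# spirit described above; the encoder and training details are placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(              # backbone later reused for defect recognition
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
)
head = nn.Conv2d(16, 1, 1)            # predicts the original image

opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(8, 1, 64, 64)          # stand-in for radiographic patches
x_dark = torch.clamp(x * 0.5, 0, 1)   # pretext corruption: brightness shift

opt.zero_grad()
loss = loss_fn(head(encoder(x_dark)), x)  # learn to restore brightness
loss.backward()
opt.step()
```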

8.
AI is remarkably successful and outperforms human experts in certain tasks, even in complex domains such as medicine. Humans, on the other hand, are experts at multi-modal thinking and can embed new inputs almost instantly into a conceptual knowledge space shaped by experience. In many fields the aim is to build systems capable of explaining themselves and engaging in interactive what-if questions. Such questions, called counterfactuals, are becoming important in the rising field of explainable AI (xAI). Our central hypothesis is that using conceptual knowledge as a guiding model of reality will help to train more explainable, more robust and less biased machine learning models, ideally able to learn from less data. One important aspect of the medical domain is that various modalities contribute to a single result. Our main question is: "How can we construct a multi-modal feature representation space (spanning images, text, and genomics data) using knowledge bases as an initial connector for the development of novel explanation interface techniques?" In this paper we argue for using Graph Neural Networks as a method of choice, enabling information fusion for multi-modal causability (causability, not to be confused with causality, is the measurable extent to which an explanation achieves a specified level of causal understanding in a human expert). The aim of this paper is to motivate the international xAI community to work further on multi-modal embeddings and interactive explainability, to lay the foundations for effective future human–AI interfaces. We emphasize that Graph Neural Networks play a major role in multi-modal causability, since causal links between features can be defined directly using graph structures.
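To make the message-passing idea concrete, here is a minimal GCN-style propagation step in NumPy over a toy three-node graph standing in for image, text, and genomics modalities; the adjacency, features, and weights are all invented.

```python
# Minimal message-passing (GCN-style) layer in NumPy, illustrating how graph
# structure can fuse features from different modalities; all values are toy.
import numpy as np

A = np.array([[0, 1, 1],              # adjacency: image node <-> text, genomics nodes
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.random.rand(3, 4)              # one 4-d feature vector per modality node
W = np.random.rand(4, 4)              # learnable weights (fixed here for brevity)

A_hat = A + np.eye(3)                 # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # normalized propagation + ReLU
print(H.shape)                        # (3, 4): fused per-node representations
```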

9.
This paper describes a new method of knowledge acquisition for expert systems. A program, KABCO, interacts with a domain expert and learns how to make examples of a concept. It does this by displaying examples based on its partial knowledge of the domain and accepting corrections from the expert. When the expert judges that KABCO has learnt the domain completely, a large number of examples are generated and given to a standard machine learning program that learns the actual expert system rules. KABCO vastly eases the task of constructing an expert system using machine learning programs because it allows expert system rule bases to be learnt from a mixture of general (rules) and specific (examples) information. At present KABCO can only be used for classification domains, but work is proceeding to extend it to other domains. KABCO learns disjunctive concepts (represented by frames) by modifying an internal knowledge base to remain consistent with all the corrections entered by the expert. Its incremental learning uses the deductive processes of modification, exclusion, subsumption and generalization. The present implementation is primitive, especially the user interface, but work is proceeding to make KABCO a much more advanced knowledge engineering tool.

10.
Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, alongside several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability and the evaluation approaches for XAI methods. The structure of this hierarchy builds on an exhaustive analysis of existing taxonomies and peer-reviewed scientific material. Findings suggest that scholars have identified numerous notions and requirements that an explanation should meet in order to be easily understandable by end-users and to provide actionable information that can inform decision making. They have also suggested various approaches to assess the degree to which machine-generated explanations meet these demands. Overall, these approaches can be clustered into human-centred evaluations and evaluations with more objective metrics. However, despite the vast body of knowledge developed around the concept of explainability, there is no general consensus among scholars on how an explanation should be defined, nor on how its validity and reliability should be assessed. Finally, this review critically discusses these gaps and limitations, and defines future research directions with explainability as the starting component of any artificial intelligent system.

11.
Symbolic regression is a machine learning task: given a training dataset with features and targets, find a symbolic function that best predicts the target from the features. This paper concentrates on dynamic regression tasks, i.e. tasks where the goal changes during the model-fitting process. Our study is motivated by dynamic regression tasks originating in the domain of reinforcement learning: we study four dynamic symbolic regression problems related to well-known reinforcement learning benchmarks, with data generated from the standard Value Iteration algorithm. We first show that in these problems the target function changes gradually, with no abrupt changes. Even these gradual changes, however, are a challenge to traditional Genetic Programming-based symbolic regression algorithms because they rely only on expression manipulation and selection. To address this challenge, we present an enhancement suitable for dynamic scenarios with gradual changes, namely the recently introduced type of leaf node called Linear Combination of Features. This type of leaf node, aided by the error backpropagation technique known from artificial neural networks, enables the algorithm to fit the data better by exploiting the error gradient rather than searching blindly using only the fitness values. This setup is compared with a baseline of the core algorithm without any of our improvements, and also with a classic evolutionary dynamic optimization technique: hypermutation. The results show that the proposed modifications greatly improve the algorithm's ability to track a gradually changing target.
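A sketch of the Linear Combination of Features idea under simplifying assumptions: a leaf node evaluating a linear form whose weights are fitted by gradient descent on squared error rather than by expression manipulation alone. The class name and training loop are illustrative, not the paper's implementation.

```python
# Sketch of a Linear Combination of Features leaf node tuned by the error
# gradient rather than by expression manipulation alone; a toy training loop.
import numpy as np

class LCFLeaf:
    """Leaf evaluating w . x + b, with weights fitted by gradient descent."""
    def __init__(self, n_features: int):
        self.w = np.zeros(n_features)
        self.b = 0.0

    def __call__(self, X: np.ndarray) -> np.ndarray:
        return X @ self.w + self.b

    def backprop_step(self, X, y, lr=0.01):
        err = self(X) - y                       # gradient of 0.5 * MSE
        self.w -= lr * X.T @ err / len(y)
        self.b -= lr * err.mean()

X = np.random.rand(100, 3)
y = 2 * X[:, 0] - X[:, 2] + 0.5                 # would drift over time in practice
leaf = LCFLeaf(3)
for _ in range(500):
    leaf.backprop_step(X, y, lr=0.1)
print(leaf.w.round(2), round(leaf.b, 2))        # approaches [2, 0, -1], 0.5
```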

12.
Online learning of complex control behaviour for autonomous mobile robots such as walking machines is an active research topic. In this article, a hybrid learning architecture based on reinforcement learning (RL) and self-organizing neural networks for online adaptivity is presented. The hybrid concept integrates different learning methods and task-oriented representations as well as available domain knowledge. The proposed concept is used for RL of control strategies at different control levels of a walking machine.
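As a hedged sketch of the RL component in such an architecture, the snippet below shows a standard tabular Q-learning update with epsilon-greedy action selection; the action names are invented stand-ins for low-level walking-machine commands, not the paper's actual control set.

```python
# Tabular Q-learning update at the heart of RL-based control learning;
# states and actions are placeholders for the walking machine's control levels.
import random
from collections import defaultdict

Q = defaultdict(float)               # (state, action) -> value
alpha, gamma, eps = 0.1, 0.95, 0.1
actions = ["shift_weight", "swing_leg", "hold"]

def choose(state):
    if random.random() < eps:        # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```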

13.
Open ontology learning is the process of extracting a domain ontology from a knowledge source in an unsupervised way. Due to its unsupervised nature, it requires filtering mechanisms to rate the importance and correctness of the extracted knowledge. This paper presents OntoCmaps, a domain-independent and open ontology learning tool that extracts deep semantic representations from corpora. OntoCmaps generates rich conceptual representations in the form of concept maps and proposes an innovative filtering mechanism based on metrics from graph theory. Our results show that metrics such as Betweenness, PageRank, HITS and Degree centrality outperform standard text-based metrics (TF-IDF, term frequency) for concept identification. We propose voting schemes based on these metrics that perform well in relationship identification, again yielding better results (in terms of precision and F-measure) than traditional metrics such as frequency of co-occurrence. The approach is evaluated against a gold standard and compared to the ontology learning tool Text2Onto. The ontology generated by OntoCmaps is more expressive than the Text2Onto ontology, especially in conceptual relationships, and leads to better results in terms of precision, recall and F-measure.
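A small sketch of the metric-based filtering idea using NetworkX, under the assumption that candidate terms form an undirected concept graph; the tiny graph and the top-half majority-voting rule are illustrative, not OntoCmaps' exact scheme.

```python
# Sketch of graph-metric filtering of candidate concepts, as described above,
# using NetworkX; the tiny concept map is invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("ontology", "concept"), ("ontology", "learning"),
                  ("concept", "map"), ("learning", "corpus")])

metrics = {
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
    "degree": nx.degree_centrality(G),
}

# Toy voting scheme: keep a term if it ranks in the top half of nodes
# according to a majority of the metrics.
k = len(G) // 2
votes = {t: sum(t in sorted(m, key=m.get, reverse=True)[:k] for m in metrics.values())
         for t in G}
kept = [t for t, v in votes.items() if v >= 2]
print(kept)   # e.g. ['ontology', ...]
```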

14.
Data stream learning has been widely studied for extracting knowledge structures from continuous and rapid data records. As data evolves over time, its underlying knowledge is subject to many challenges. Concept drift, one of the core challenges in the stream learning community, is described as a change in the statistical properties of the data over time, causing most machine learning models to become less accurate because the changes occur in unforeseen ways. This is particularly problematic because the evolution of data can lead to dramatic changes in knowledge. We address this problem by studying the semantic representation of data streams in the Semantic Web, i.e., ontology streams. Such streams are ordered sequences of data annotated with ontological vocabulary. In particular, we exploit three levels of knowledge encoded in ontology streams to deal with concept drift: (i) the existence of novel knowledge gained from stream dynamics, (ii) the significance of knowledge change and evolution, and (iii) the (in)consistency of knowledge evolution. Such knowledge is encoded as knowledge graph embeddings through a combination of novel representations: entailment vectors, entailment weights, and a consistency vector. We illustrate our approach on classification tasks of supervised learning. Key contributions of the study include: (i) an effective knowledge graph embedding approach for stream ontologies, and (ii) a generic consistent-prediction framework with integrated knowledge graph embeddings for dealing with concept drift. The experiments show that our approach provides accurate predictions of air quality in Beijing and bus delay in Dublin with real-world ontology streams.

15.
Context: One of the most important factors in the development of a software project is the quality of its requirements. Erroneous requirements, if not detected early, may cause serious problems, such as substantial additional costs, failure to meet the expected objectives and delays in delivery dates. For these reasons, great effort must be devoted in requirements engineering to ensuring that a project's requirements are of high quality. One aim of this discipline is the automatic processing of requirements to assess their quality; this is a complex task, however, because the quality of requirements depends mostly on the interpretation of experts and on the necessities and demands of the project at hand. Objective: The objective of this paper is to assess the quality of requirements automatically, emulating the assessment that a project's quality expert would make. Method: The proposed methodology is based on learning from standard metrics that represent the characteristics an expert takes into consideration when judging the good or bad quality of requirements. Using machine learning techniques, a classifier is trained with requirements previously classified by the expert, and is then used to classify newly provided requirements. Results: We present two approaches that represent the methodology under two variants of the problem, depending on how the requirements learning corpus is balanced, obtaining different accuracy and efficiency results that allow both representations to be evaluated. The paper demonstrates the reliability of the methodology with a case study using requirements provided by the Requirements Working Group of the INCOSE organization. Conclusions: A methodology that evaluates the quality of requirements written in natural language is presented, which emulates the quality assessment that the expert would provide for new requirements, with an average accuracy of 86.1%.
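A minimal sketch of the metric-based learning idea, assuming scikit-learn: each requirement is reduced to a vector of quality metrics and a classifier is trained on the expert's earlier labels. The metric set, values, and model choice are illustrative, not the paper's configuration.

```python
# Sketch of metric-based requirement-quality learning: represent each
# requirement by quality metrics and train on expert labels. Metric names
# and values are illustrative, not the paper's exact indicator set.
from sklearn.ensemble import RandomForestClassifier

# Features per requirement: [word_count, ambiguity_terms, readability_score]
X_train = [[12, 0, 70], [45, 3, 40], [18, 1, 65], [60, 5, 30]]
y_train = ["good", "bad", "good", "bad"]      # the expert's earlier assessments

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

new_req = [[25, 2, 55]]                       # metrics of a newly written requirement
print(clf.predict(new_req))                   # emulates the expert's judgement
```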

16.
Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been cast. This is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. XAI is becoming more and more crucial for deep learning-powered applications, especially for medical and healthcare studies, even though these deep neural networks can deliver impressive performance. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons that successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI, and in particular its advances in healthcare applications. We then introduced our XAI solutions leveraging multi-modal and multi-centre data fusion, and validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of the proposed XAI solutions, from which we envisage successful applications to a broader range of clinical questions.

17.
Using an AI-based fault-diagnosis expert-system approach, supplemented by theories of fuzzy mathematics, neural networks, machine learning and databases, this work addresses key technical issues such as the proper representation of diagnostic knowledge, multiple fast inference mechanisms based on both symbols and numerical values, automatic knowledge acquisition, and intelligent knowledge-base management, and builds an intelligent fuzzy fault-diagnosis expert system.
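As a toy sketch of the fuzzy-reasoning ingredient of such a system: triangular membership functions combined by max-min inference for a single diagnosis rule. The variables, thresholds, and rule are invented for illustration.

```python
# Toy sketch of the fuzzy part of a diagnosis system: triangular membership
# functions plus max-min rule inference; variables and thresholds are invented.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

vibration, temperature = 7.2, 85.0            # symptom measurements

mu_vib_high = tri(vibration, 5, 10, 15)
mu_temp_high = tri(temperature, 60, 100, 140)

# Rule: IF vibration is high AND temperature is high THEN bearing fault.
mu_bearing_fault = min(mu_vib_high, mu_temp_high)   # max-min inference
print(f"bearing fault degree: {mu_bearing_fault:.2f}")
```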

18.
Automated knowledge acquisition is an important research issue in machine learning. Several methods of inductive learning, such as the ID3 family and the AQ family, have been applied to discover meaningful knowledge from large databases, and their usefulness has been demonstrated in several respects. However, since these methods are deterministic in nature and the reliability of the acquired knowledge is not evaluated statistically, they are ineffective when applied to domains that are essentially probabilistic, such as medical domains. Extending concepts of rough set theory to a probabilistic domain, we introduce a new approach to knowledge acquisition that induces probabilistic rules based on rough set theory (PRIMEROSE), and develop a program that extracts rules for an expert system from a clinical database using this method. The results show that the derived rules closely correspond to those of the medical experts.
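A hedged sketch of the rule-statistics idea: from a small table of cases, estimate a candidate rule's accuracy (the conditional probability of the conclusion given the condition) and coverage, in the spirit of PRIMEROSE; the toy records are invented, not clinical data.

```python
# Sketch of deriving a probabilistic rule from a table of cases: estimate
# the rule's accuracy and coverage; the toy records are invented.
cases = [
    {"headache": "yes", "nausea": "yes", "disease": "migraine"},
    {"headache": "yes", "nausea": "no",  "disease": "migraine"},
    {"headache": "yes", "nausea": "yes", "disease": "tension"},
    {"headache": "no",  "nausea": "no",  "disease": "tension"},
]

def rule_stats(condition: dict, conclusion: str):
    covered = [c for c in cases if all(c[a] == v for a, v in condition.items())]
    hits = [c for c in covered if c["disease"] == conclusion]
    accuracy = len(hits) / len(covered)            # P(conclusion | condition)
    coverage = len(hits) / sum(c["disease"] == conclusion for c in cases)
    return accuracy, coverage

acc, cov = rule_stats({"headache": "yes"}, "migraine")
print(f"[headache=yes] => migraine  accuracy={acc:.2f} coverage={cov:.2f}")
# accuracy=0.67 coverage=1.00
```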

19.
A rule-based expert system is demonstrated to have both a symbolic computational network representation and a sub-symbolic connectionist representation. These alternate views enhance the usefulness of the original system by facilitating the introduction of connectionist learning methods into the symbolic domain. The connectionist representation learns and stores metaknowledge in highly connected subnetworks and domain knowledge in a sparsely connected expert network superstructure. The total connectivity of the neural network representation approximates that of real neural systems and hence avoids the scaling and memory-stability problems associated with other connectionist models. Paper given at the symposium Approaches to Cognition, the fifteenth annual Symposium in Philosophy held at the University of North Carolina, Greensboro, April 5–7, 1991. Research partially supported by the US Office of Naval Research and the Florida High Technology and Industry Council.

20.
THE USEFULNESS OF A MACHINE LEARNING APPROACH TO KNOWLEDGE ACQUISITION   (Cited 5 times: 0 self-citations, 5 by others)
This paper presents results of experiments showing how machine learning methods are useful for rule induction in the process of knowledge acquisition for expert systems. Four machine learning methods were used: ID3, ID3 with dropping conditions, and two options of the system LERS (Learning from Examples based on Rough Sets): LEM1 and LEM2. Two knowledge acquisition options of LERS were used as well. All six methods were used for rule induction from six real-life data sets. The main objective was to test how an expert system, supplied with these rule sets, performs without information on a few attributes. Thus the expert system attempts to classify examples with all values of some attributes missing. The experiments make clear that the machine learning methods performed much worse than the knowledge acquisition options of LERS. Thus, machine learning methods used for knowledge acquisition should be replaced by other methods of rule induction that generate complete sets of rules. The knowledge acquisition options of LERS are examples of such appropriate ways of inducing rules for building knowledge bases.
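To ground the comparison, here is a minimal sketch of the information-gain criterion underlying ID3-style rule induction, computed on an invented toy dataset.

```python
# Sketch of the information-gain criterion at the core of ID3-style
# induction; the toy dataset is invented.
from collections import Counter
from math import log2

rows = [("sunny", "no"), ("sunny", "no"), ("rain", "yes"), ("rain", "yes"),
        ("overcast", "yes")]           # (attribute value, class label)

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

labels = [y for _, y in rows]
base = entropy(labels)

# Expected entropy after splitting on the attribute's values.
split = sum(
    (len(group) / len(rows)) * entropy(group)
    for v in {x for x, _ in rows}
    for group in [[y for x, y in rows if x == v]]
)
print(f"information gain of split: {base - split:.3f}")   # ~0.971 here
```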
