Paper Title

Deep Argumentative Explanations

Authors

Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni

Abstract

Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs). We propose a novel framework for obtaining (local) explanations from NNs while providing transparency about their inner workings, and show how to deploy it for various neural architectures and tasks. We refer to our novel explanations collectively as Deep Argumentative eXplanations (DAXs in short), given that they reflect the deep structure of the underlying NNs and that they are defined in terms of notions from computational argumentation, a form of symbolic AI offering useful reasoning abstractions for explanation. We evaluate DAXs empirically showing that they exhibit deep fidelity and low computational cost. We also conduct human experiments indicating that DAXs are comprehensible to humans and align with their judgement, while also being competitive, in terms of user acceptance, with some existing approaches to XAI that also have an argumentative spirit.
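
The abstract grounds DAXs in computational argumentation. For readers unfamiliar with that notion, below is a minimal, illustrative sketch of the kind of abstraction it refers to: a Dung-style abstract argumentation framework, i.e. a set of arguments with an attack relation, evaluated here under grounded semantics. This is not the paper's DAX construction; the function name and toy arguments are assumptions made for illustration only.

```python
# Minimal Dung-style abstract argumentation framework: arguments plus an
# "attacks" relation, evaluated under grounded semantics. Illustrative only;
# NOT the paper's DAX method.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments all of whose attackers are defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:  # every attacker is already defeated
                accepted.add(a)
                # anything attacked by an accepted argument is defeated
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# Example: a attacks b, b attacks c -> grounded extension is {a, c}
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))  # {'a', 'c'}
```

In this toy example, `a` is unattacked and hence accepted; `b` is defeated by `a`; `c` is then reinstated because its only attacker is defeated. This accept/defeat reasoning is the sort of explanation-friendly abstraction the abstract credits to computational argumentation.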
