ACL 2020 Papers on Explainability
Author: Internet
DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification
- Abstract: Recently, many methods discover effective evidence from reliable sources via appropriate neural networks for explainable claim verification, and this line of work has been widely recognized. However, in these methods the discovery process of evidence is non-transparent and unexplained. Moreover, the discovered evidence only roughly targets the interpretability of the whole claim sequence and is insufficient to focus on the false parts of claims. In this paper, we propose a Decision Tree-based Co-Attention model (DTCA) to discover evidence for explainable claim verification. Specifically, we first construct a Decision Tree-based Evidence model (DTE) to select comments with high credibility as evidence in a transparent and interpretable way. Then we design Co-attention Self-attention networks (CaSa) to make the selected evidence interact with claims, which serves to 1) train DTE to determine the optimal decision thresholds and obtain more powerful evidence, and 2) utilize the evidence to find the false parts of the claim. Experiments on two public datasets, RumourEval and PHEME, demonstrate that DTCA not only provides explanations for the results of claim verification but also achieves state-of-the-art performance, boosting the F1-score by 3.11% and 2.41%, respectively.
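The abstract only names the claim-evidence interaction (CaSa) at a high level. As a rough illustration of the generic co-attention idea, a minimal numpy sketch is given below; it is not the authors' CaSa implementation, and the bilinear affinity matrix `w`, the shapes, and the toy dimensions are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(claim, evidence, w):
    """Generic co-attention: claim tokens attend over evidence tokens and vice versa.

    claim:    (n, d) token embeddings of the claim
    evidence: (m, d) token embeddings of the selected evidence comments
    w:        (d, d) trainable affinity matrix (assumed for this sketch)
    """
    affinity = claim @ w @ evidence.T            # (n, m) pairwise affinities
    claim_to_evidence = softmax(affinity, 1)     # each claim token's weights over evidence tokens
    evidence_to_claim = softmax(affinity.T, 1)   # each evidence token's weights over claim tokens
    attended_evidence = claim_to_evidence @ evidence  # (n, d) evidence summary per claim token
    attended_claim = evidence_to_claim @ claim        # (m, d) claim summary per evidence token
    return attended_evidence, attended_claim, claim_to_evidence

rng = np.random.default_rng(0)
claim = rng.normal(size=(8, 16))      # 8 claim tokens, 16-dim embeddings
evidence = rng.normal(size=(20, 16))  # 20 evidence tokens
w = rng.normal(size=(16, 16))
_, _, weights = co_attention(claim, evidence, w)
# Rows of `weights` with sharply peaked distributions indicate claim tokens the
# evidence speaks to most -- the intuition behind locating the "false parts" of a claim.
print(weights.shape)  # (8, 20)
```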
Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
- Abstract: Algorithmic approaches to interpreting machine learning models have proliferated in recent years. We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experimental factors. A model is simulatable when a person can predict its behavior on new inputs. Through two kinds of simulation tests involving text and tabular data, we evaluate five explanation methods: (1) LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a Composite approach that combines explanations from each method. Clear evidence of method effectiveness is found in very few cases: LIME improves simulatability in tabular classification, and our Prototype method is effective in counterfactual simulation tests. We also collect subjective ratings of explanations, but we do not find that ratings are predictive of how helpful explanations are. Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains. We show that (1) we need to be careful about the metrics we use to evaluate explanation methods, and (2) there is significant room for improvement in current methods.
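For reference, the simulatability metric the paper isolates can be phrased as simple agreement between a user's guesses and the model's actual predictions. The sketch below is a simplified illustration with made-up numbers and a hypothetical `simulatability` helper, not the paper's exact protocol or code.

```python
from typing import Sequence

def simulatability(user_predictions: Sequence[int], model_predictions: Sequence[int]) -> float:
    """Fraction of inputs on which the user correctly guesses the model's output."""
    assert len(user_predictions) == len(model_predictions)
    matches = sum(u == m for u, m in zip(user_predictions, model_predictions))
    return matches / len(model_predictions)

# Pre vs. post phase: the same user guesses the model's label before and after
# seeing explanations; the gain is attributed to the explanation method.
pre = simulatability([1, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 1])   # 4/6 correct
post = simulatability([1, 1, 1, 0, 0, 1], [1, 1, 1, 0, 0, 1])  # 6/6 correct
print(f"explanation effect: {post - pre:+.2f}")
```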
Tags: explanation, DTCA, interpretability, evidence, article, model, ACL2020, explanations. Source: https://blog.csdn.net/weixin_38072029/article/details/114818470