Title
Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach
Authors
Abstract
Predictive process analytics focuses on predicting the future states of running instances of a business process. While advanced machine learning techniques have been used to increase the accuracy of predictions, the resulting predictive models lack transparency. Current explainable machine learning methods, such as LIME and SHAP, can be used to interpret black-box models. However, it is unclear how fit for purpose these methods are for explaining process predictive models. In this paper, we draw on evaluation measures used in the field of explainable AI and propose functionally-grounded evaluation metrics for assessing explainable methods in predictive process analytics. We apply the proposed metrics to evaluate the performance of LIME and SHAP in interpreting process predictive models built on XGBoost, which has been shown to be relatively accurate in process predictions. We conduct the evaluation using three open-source, real-world event logs and analyse the evaluation results to derive insights. The research contributes to understanding the trustworthiness of explainable methods for predictive process analytics as a fundamental and key step towards human user-oriented evaluation.
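For readers unfamiliar with the toolchain the abstract names, the following is a minimal sketch (not the authors' code) of how LIME and SHAP can be applied to an XGBoost classifier, the setup being evaluated. It uses the standard Python xgboost, shap, and lime packages; the synthetic data, feature names, and all parameter choices are illustrative assumptions standing in for an encoded event log.

    # Minimal sketch: explaining an XGBoost model with SHAP and LIME.
    # The data below is a synthetic stand-in for an encoded event log:
    # 500 process instances, 10 numeric features, a binary outcome label.
    import numpy as np
    import shap
    import xgboost
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    feature_names = [f"f{i}" for i in range(10)]

    # Train the black-box predictive model.
    model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

    # SHAP: TreeExplainer computes Shapley values for tree ensembles;
    # averaging their magnitudes gives a global feature ranking.
    shap_values = shap.TreeExplainer(model).shap_values(X)
    print(np.abs(shap_values).mean(axis=0))

    # LIME: fit a local surrogate around one instance and report the
    # top feature weights for that single prediction.
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names,
        class_names=["neg", "pos"], mode="classification")
    explanation = explainer.explain_instance(
        X[0], model.predict_proba, num_features=5)
    print(explanation.as_list())

Note the structural difference the sketch exposes: SHAP's TreeExplainer exploits the tree ensemble directly, while LIME perturbs the instance and queries predict_proba, which is why the paper's evaluation can meaningfully compare the two methods' explanations of the same model.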