Paper Title

A General Framework for Quantifying Aleatoric and Epistemic Uncertainty in Graph Neural Networks

Authors

Sai Munikoti, Deepesh Agarwal, Laya Das, Balasubramaniam Natarajan

Abstract

Graph Neural Networks (GNNs) provide a powerful framework that elegantly integrates graph theory with machine learning for modeling and analyzing networked data. We consider the problem of quantifying the uncertainty in GNN predictions stemming from modeling errors and measurement uncertainty. We consider aleatoric uncertainty in the form of probabilistic links and noise in the feature vectors of nodes, while epistemic uncertainty is incorporated via a probability distribution over the model parameters. We propose a unified approach to treat both sources of uncertainty in a Bayesian framework, where Assumed Density Filtering is used to quantify aleatoric uncertainty and Monte Carlo dropout captures uncertainty in the model parameters. Finally, the two sources of uncertainty are aggregated to estimate the total uncertainty in the predictions of a GNN. Results on real-world datasets demonstrate that the Bayesian model performs on par with a frequentist model and provides additional information about prediction uncertainty that is sensitive to uncertainties in the data and the model.
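The aggregation step described in the abstract can be illustrated with a generic Monte Carlo dropout sketch. This is not the paper's ADF-based GNN; it is a minimal NumPy toy (random weights, a fixed aleatoric log-variance, a made-up `stochastic_forward` function) that only shows how epistemic variance from dropout samples and per-sample aleatoric variance are combined via the law of total variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer "model" with random weights (illustration only).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def stochastic_forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays active at test time."""
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) > p_drop
    h = h * mask / (1.0 - p_drop)          # inverted dropout scaling
    mean = h @ W2                           # predictive mean
    log_var = np.full_like(mean, -1.0)      # fixed aleatoric log-variance (assumed)
    return mean, np.exp(log_var)

x = rng.normal(size=(1, 4))
T = 200                                     # number of MC dropout samples
means, alea_vars = zip(*(stochastic_forward(x) for _ in range(T)))
means = np.stack(means)
alea_vars = np.stack(alea_vars)

# Law of total variance: total = E[aleatoric variance] + Var[predictive means]
aleatoric = alea_vars.mean(axis=0)
epistemic = means.var(axis=0)
total = aleatoric + epistemic
print(aleatoric.item(), epistemic.item(), total.item())
```

The epistemic term shrinks as the dropout samples agree more, while the aleatoric term reflects irreducible data noise; their sum is the total predictive uncertainty reported per node.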
