Paper Title

Informative Bayesian Neural Network Priors for Weak Signals

Authors

Tianyu Cui, Aki Havulinna, Pekka Marttinen, Samuel Kaski

Abstract

Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network is challenging but essential in applications with limited data and weak signals. Two types of domain knowledge are commonly available in scientific applications: 1. feature sparsity (fraction of features deemed relevant); 2. signal-to-noise ratio, quantified, for instance, as the proportion of variance explained (PVE). We show how to encode both types of domain knowledge into the widely used Gaussian scale mixture priors with Automatic Relevance Determination. Specifically, we propose a new joint prior over the local (i.e., feature-specific) scale parameters that encodes knowledge about feature sparsity, and a Stein gradient optimization to tune the hyperparameters in such a way that the distribution induced on the model's PVE matches the prior distribution. We show empirically that the new prior improves prediction accuracy, compared to existing neural network priors, on several publicly available datasets and in a genetics application where signals are weak and sparse, often outperforming even computationally intensive cross-validation for hyperparameter tuning.
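The key idea that a prior over network weights induces a distribution on the model's PVE can be illustrated with a small Monte Carlo simulation. The sketch below is illustrative only, not the paper's exact model or its Stein-gradient tuning procedure: it draws first-layer weights from a Gaussian scale mixture with half-Cauchy local scales (an ARD-style prior) and estimates the resulting PVE distribution for two settings of a global scale hyperparameter. All function and parameter names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pve_samples(global_scale, n_draws=200, n_features=50, n_hidden=20,
                noise_var=1.0, n_points=500):
    """Monte Carlo estimate of the distribution induced on the PVE by a
    Gaussian scale mixture (ARD-style) prior on first-layer weights.
    Illustrative sketch only; not the paper's exact prior."""
    X = rng.standard_normal((n_points, n_features))
    pves = []
    for _ in range(n_draws):
        # Local (feature-specific) scales: half-Cauchy draws, as in
        # horseshoe-type Gaussian scale mixtures.
        local = np.abs(rng.standard_cauchy(n_features))
        W1 = rng.standard_normal((n_features, n_hidden)) \
             * (global_scale * local)[:, None]
        W2 = rng.standard_normal(n_hidden) / np.sqrt(n_hidden)
        f = np.tanh(X @ W1) @ W2          # network output on the inputs
        var_f = f.var()
        # PVE = signal variance / (signal variance + noise variance)
        pves.append(var_f / (var_f + noise_var))
    return np.array(pves)

# A larger global scale shifts the induced PVE distribution toward 1.
low = pve_samples(0.01).mean()
high = pve_samples(1.0).mean()
```

Hyperparameter tuning in the paper then adjusts such scales so that this induced PVE distribution matches a prior belief (e.g., "the signal explains at most a few percent of the variance"), rather than leaving the effective signal-to-noise ratio an accident of the weight prior.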
