Paper Title
Scalable Bayesian Transformed Gaussian Processes
Paper Authors
Paper Abstract
The Bayesian transformed Gaussian process (BTG) model, proposed by Kedem and Oliveira, is a fully Bayesian counterpart to the warped Gaussian process (WGP): it places a joint prior over input-warping and kernel hyperparameters and marginalizes them out. This fully Bayesian treatment of hyperparameters often provides more accurate regression estimates and superior uncertainty propagation, but is prohibitively expensive. The BTG posterior predictive distribution, itself estimated through high-dimensional integration, must be inverted in order to perform model prediction. To make the Bayesian approach practical and comparable in speed to maximum-likelihood estimation (MLE), we propose principled and fast techniques for computing with BTG. Our framework uses doubly sparse quadrature rules, tight quantile bounds, and rank-one matrix algebra to enable both fast model prediction and model selection. These scalable methods allow us to regress over higher-dimensional datasets and apply BTG with layered transformations that greatly improve its expressibility. We demonstrate that BTG achieves superior empirical performance over MLE-based models.
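To make the prediction step described above concrete, the sketch below illustrates (in generic Python, not the authors' implementation) the computational pattern the abstract refers to: the posterior predictive CDF is approximated as a quadrature-weighted mixture over hyperparameter settings, and point predictions and credible intervals are obtained by numerically inverting that CDF within quantile bounds. The node count, weights, and Student-t stand-ins for the per-node predictives are illustrative assumptions only.

```python
# Minimal sketch of BTG-style prediction by CDF inversion (assumptions noted above).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Hypothetical quadrature nodes over (warping, kernel) hyperparameters and
# their normalized weights; in BTG these would come from sparse quadrature rules.
num_nodes = 8
weights = rng.random(num_nodes)
weights /= weights.sum()

# Per-node predictive distributions; Student-t location/scale families are
# used here only as stand-ins for the conditional predictives.
locs = rng.normal(0.0, 1.0, num_nodes)
scales = rng.uniform(0.5, 2.0, num_nodes)
dfs = rng.integers(3, 30, num_nodes)

def mixture_cdf(y):
    """Quadrature-weighted mixture CDF approximating the posterior predictive CDF."""
    return sum(w * stats.t.cdf(y, df=df, loc=m, scale=s)
               for w, df, m, s in zip(weights, dfs, locs, scales))

def predictive_quantile(p, lo=-50.0, hi=50.0):
    """Invert the mixture CDF by root-finding.

    The paper's tight quantile bounds would shrink [lo, hi] per query;
    wide fixed bounds are used here purely for illustration.
    """
    return optimize.brentq(lambda y: mixture_cdf(y) - p, lo, hi)

# Point prediction = predictive median; credible interval from the 2.5%/97.5% quantiles.
median = predictive_quantile(0.5)
interval = (predictive_quantile(0.025), predictive_quantile(0.975))
print(f"median={median:.3f}, 95% interval=({interval[0]:.3f}, {interval[1]:.3f})")
```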